Different Errors Running 2 Crawls on Effectively the Same Setup
-
Our developers are moving away from using robots.txt files due to security risks, so we have been in the process of removing them from our sites. However, we and our clients still want to run Moz crawl reports, as they can highlight useful information.
The two sites in question sit on the same server with the same settings (in fact, they run on the same Magento install). Neither has a robots.txt file present (they 404), and as per Chiaryn's response here, https://moz.com/community/q/without-robots-txt-no-crawling, this should work fine?
However, for www.iconiclights.co.uk we got: 902: Network errors prevented crawler from contacting server for page.
While for www.valuelights.co.uk we got: 612: Page banned by error response for robots.txt.
Both crawls were run recently, there was no robots.txt present, and, as mentioned, the sites are on the same setup and server. We have also just tested this by uploading a blank robots.txt file to see if it changed anything, but we get exactly the same errors.
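For what it's worth, the "a 404ing robots.txt is fine" interpretation is also how Python's standard-library robots.txt parser behaves, which makes a handy sanity check. This is only a sketch of standard robots.txt semantics, not Moz's actual crawler logic, and example.com is a placeholder:

```python
from urllib.robotparser import RobotFileParser

# An empty robots.txt body contains no Disallow rules,
# so every path is allowed to every user agent.
rp = RobotFileParser()
rp.parse([])  # parse an empty rule set, as if robots.txt were blank

print(rp.can_fetch("rogerbot", "http://www.example.com/any-page"))  # True
```

Notably, when `RobotFileParser.read()` fetches robots.txt over the network, a 404 is treated as "allow everything" while a 401/403 is treated as "disallow everything", which suggests how a 612-style "banned by error response for robots.txt" can occur if the server answers 403 to the robots.txt request itself.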
I have had a look but can't find anything on here that really matches this - help would really be appreciated!
Thanks!
-
Hey there! Tawny from the Customer Support team here!
This sounds like a juicy issue, and one I'd love to dive into and help you with! Unfortunately, without being able to look at your campaigns and account directly, it's tough to provide specific support for these issues.
That said, if you write in to help@moz.com and give us the details of what you're seeing - basically exactly what's in this question - we should be able to help investigate for you.
-
Having no robots.txt, or a blank one, is perfectly fine (though honestly, it's no more of a security risk than your sitemap.xml). Your current issue is that both of your sites are returning 403 status codes to crawlers while people are still able to land on your pages. This has nothing to do with the robots.txt file being changed or removed; that's just an odd coincidence. It is most likely an issue in your .htaccess file.
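As a hypothetical illustration of the kind of .htaccess rule that produces exactly this symptom (403s for crawlers, normal pages for browsers), a user-agent based deny might look like the following; the bot names and the rule itself are assumptions for illustration, not something taken from these sites:

```apache
# Hypothetical example: rules like these return 403 Forbidden to any
# client whose User-Agent matches the listed patterns, while ordinary
# browsers are unaffected.
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (rogerbot|dotbot|AhrefsBot) [NC]
RewriteRule .* - [F,L]
```

You can check what a crawler sees with `curl -I -A "rogerbot" http://www.iconiclights.co.uk/` and compare it to the same request sent with a browser user-agent string; a 403 on the first but a 200 on the second would confirm user-agent filtering.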