Rogerbot does not catch all existing 4XX Errors
-
Hi, I've noticed that Rogerbot presents me with new 4XX errors after each new crawl. Why doesn't it report them all at once?
I have a small static site. Nine crawls ago it had ten 4XX errors, so I tried to fix them all.
On the next crawl Rogerbot still found 5 errors, so I thought I hadn't fixed them all... but this has now happened many times, so before the latest crawl I double-checked that I had really fixed every single error. Today, although I genuinely corrected those 5 errors, Rogerbot dug out 2 "new" errors. So does Rogerbot not catch all the errors, even ones that have been on my site for many weeks?
Please see the screenshot of how I was chasing the errors.
-
I understand.
I am not using a CMS and the site is not very big, so I wondered why Rogerbot did not find all the 404 errors the first time, since they have been there for many months.
Holger
-
Hey Holger,
Our crawler will catch as many errors as it can. It's possible that these errors were not present, or simply were not found, at the time of the earlier crawls. I'm running a crawl test to see if there's any discrepancy between your current campaign crawl and mine, just to double-check.
In general, Kyle is correct that sometimes those errors just crop up, especially if you're using any sort of CMS.
I hope that helps. I'll update here after my crawl test is done.
Cheers,
Joel.
-
Hi Holger,
4XX errors can be quite common depending on your site setup, so don't be surprised if Roger keeps returning errors for you to fix.
I would advise checking this data against Google Webmaster Tools' own crawl error data, which you can find in Webmaster Tools under Health > Crawl Errors.
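For an independent check outside any crawler product, a broken-link scan of a small static site can be sketched with Python's standard library alone. This is a rough illustration, not Rogerbot's actual logic, and `http://example.com/` below is a placeholder for your own domain:

```python
from html.parser import HTMLParser
from urllib.error import HTTPError
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects every href/src value found in a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)


def internal_links(html, base_url):
    """Return sorted absolute URLs that live on the same host as base_url."""
    parser = LinkExtractor()
    parser.feed(html)
    host = urlparse(base_url).netloc
    absolute = {urljoin(base_url, link) for link in parser.links}
    return sorted(u for u in absolute if urlparse(u).netloc == host)


def find_404s(start_url):
    """Breadth-first crawl from start_url; return (url, status) for 4XX answers."""
    seen, queue, broken = set(), [start_url], []
    while queue:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            with urlopen(url) as resp:
                # Only parse HTML pages for further links to follow.
                if resp.headers.get_content_type() == "text/html":
                    page = resp.read().decode("utf-8", errors="replace")
                    queue.extend(internal_links(page, url))
        except HTTPError as err:
            if 400 <= err.code < 500:
                broken.append((url, err.code))
    return broken
```

Running `find_404s("http://example.com/")` walks every same-host link it can reach and returns the URLs that answered with a 4XX status, which you can then compare against what a crawl report claims.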
I hope that helps,
K
Related Questions
-
Duplicate Content errors - not going away with canonical
I am getting Duplicate Content Errors reported by Moz on search result pages due to parameters. I went through the document on resolving Duplicate Content errors and implemented the canonical solution to resolve it. The canonical in the header has been in place for a few weeks now and Moz is still showing the pages as Duplicate Content despite the canonical reference. Is this a Moz bug? http://mathematica-mpr.com/news/?facet={81C018ED-CEB9-477D-AFCC-1E6989A1D6CF}
Moz Pro | jpfleiderer0
-
Rogerbot's crawl behaviour vs google spiders and other crawlers - disparate results have me confused.
I'm curious how accurately Rogerbot replicates Google's crawler. I currently have a site that is reporting over 200 pages of duplicate titles/content in the Moz tools. The pages in question all carry session IDs and were blocked in robots.txt about 3 weeks ago, yet the errors are still appearing. I've also crawled the site with the Screaming Frog SEO Spider; according to Screaming Frog, the offending pages are blocked and are not being crawled. Webmaster Tools is also reporting no crawl errors. Is there something I'm missing here? Why would I get such different results, and which ones should I trust? Does Rogerbot ignore robots.txt? Any suggestions would be appreciated.
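For reference, session-ID URLs are usually blocked with a wildcard pattern in robots.txt; a sketch (the `sessionid` parameter name is hypothetical, so adjust it to however the session IDs actually appear in your URLs):

```
User-agent: *
Disallow: /*?sessionid=
```

Worth noting: robots.txt only stops future crawling; it doesn't retroactively clear pages a tool discovered before the block was in place, so reports can lag behind the change by a crawl cycle or more.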
Moz Pro | KJDMedia0
-
Weird 404 Errors
Hi All, although my Moz error scans have been pretty clean for a while, a law firm site I manage recently cropped up with 80+ 404 errors since the last scan. I'm a little baffled, as the URLs it shows being returned look like this: http://www.yoursite.com/ http://www.yoursite.com/resource.html For some reason it seems to be initiating a query that calls the root domain twice before the actual resource. I installed ModX Revolution 2.2.6-PL on the site in question, and am hoping a canonical plugin I just started using will take care of these. Has this happened to anyone else? What did you do to solve the issue? Thanks for your time and any tips!
Moz Pro | G2W0
-
Dot Net Nuke generating long URL showing up as crawl errors!
Since early July, a DotNetNuke site has been generating long URLs that show up in campaigns as crawl errors: long URL, duplicate content, duplicate page title. URL: http://www.wakefieldpetvet.com/Home/tabid/223/ctl/SendPassword/Default.aspx?returnurl=%2F Is this a problem with DNN or a nuance to be ignored? Can it be controlled? Google Webmaster Tools shows no crawl errors like this.
Moz Pro | EricSchmidt0
-
Why am I getting an access error when creating my first campaign?
The exact error message is: "The change you wanted was rejected. Maybe you tried to change something you didn't have access to." I checked the word count and I am definitely under 300 keywords. I have also made sure that the branded keywords are entered one per entry form. My cookies are clear and there should be no issue with my browser.
Moz Pro | trufflelabs0
-
Crawl Diagnostics finding pages that don't exist. Will rel=canonical help?
I recently set up a campaign for www.completeoffice.co.uk; I'm the in-house developer there. When the crawl diagnostics completed, I went to check the results and, to my surprise, it reported well over 100 missing or empty title tags. I clicked through to see which pages, and nearly all the pages it says have missing or empty title tags DO NOT EXIST. This has really confused me and I need help figuring out how to solve it. Can anyone help? The attached image is a screenshot of some of the links it showed me in Crawl Diagnostics; nearly all of these do not exist. Will a rel=canonical tag in the head section of the actual pages help? For example, the page that actually exists is www.completeoffice.co.uk/Products.php, whereas the crawl showed www.completeoffice.co.uk/Products/Products.php. Will adding the rel=canonical tag in the header of the real Products.php solve this?
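If the phantom URLs actually resolve (i.e. the server answers /Products/Products.php with the same content as /Products.php), a canonical tag in the real page's head should consolidate them; a sketch using the page from the question:

```
<!-- in the <head> of the real Products.php -->
<link rel="canonical" href="http://www.completeoffice.co.uk/Products.php" />
```

Separately, phantom paths like /Products/Products.php are often caused by a relative link (e.g. href="Products.php") being resolved against a directory-style URL, so auditing the site's internal links for that pattern may address the root cause rather than just the symptom.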
Moz Pro | CompleteOffice0