Error 406 with crawler test
-
Hi all. I have a big problem with the Moz crawler on this website: www.edilflagiello.it.
In July, with the old version of the site, I had no problems and the crawler gave me a CSV report with all the URLs. But since we switched to the new Magento theme and restyled the old version, each time I use the crawler I receive a CSV file with this error:
"error 406"
Can you help me understand what the problem is? I have already tried disabling .htaccess and robots.txt, but nothing changed.
The website is working well, and I have also crawled it with Screaming Frog without any problems.
-
Thank you very much, Dirk. This Sunday I will try to fix all the errors, and then I will run the crawler again. Thanks for your assistance.
-
I noticed that you have a Vary: User-Agent header, so I tried visiting your site with JavaScript disabled and the user agent switched to Rogerbot. Result: the site did not load (it just spun endlessly), and the console showed quite a number of elements that generated 404s. In the end there was a timeout.
Try Screaming Frog: set the user agent to Custom and change the values to
Name: Rogerbot
Agent: Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; http://www.seomoz.org/dp/rogerbot)
It will be unable to crawl your site. Check your server configuration; there are issues in how it handles the Rogerbot user agent.
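The same user-agent check can also be scripted. Here is a minimal Python sketch (standard library only) that fetches a page once with the Rogerbot user agent and once with an ordinary browser user agent, then compares the status codes. The browser UA string is just an example, and `fetch_status`/`diagnose` are hypothetical helper names, not anything from Moz or Screaming Frog:

```python
import urllib.error
import urllib.request

# Rogerbot user-agent string, as given in the Screaming Frog settings above.
ROGERBOT_UA = ("Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; "
               "http://www.seomoz.org/dp/rogerbot)")
# Example browser user agent for comparison (any common browser UA works).
BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"

def fetch_status(url, user_agent):
    """Return the HTTP status code the server sends for this user agent."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as exc:
        return exc.code  # e.g. 406 Not Acceptable

def diagnose(status_by_ua):
    """Summarize whether any user agent gets a different (error) response."""
    if len(set(status_by_ua.values())) == 1:
        return "server treats all user agents the same"
    blocked = sorted(ua for ua, code in status_by_ua.items() if code >= 400)
    return "user-agent-specific blocking: " + ", ".join(blocked)

# Live check (needs network access), e.g.:
# results = {"rogerbot": fetch_status("http://www.edilflagiello.it", ROGERBOT_UA),
#            "browser": fetch_status("http://www.edilflagiello.it", BROWSER_UA)}
# print(diagnose(results))
```

If the rogerbot request comes back 406 while the browser request comes back 200, the problem is server-side handling of that user agent rather than anything in the pages themselves.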
Check the attached images.
Dirk
-
Nothing. After I fixed all the 40x errors, the crawler report is still empty. Any other ideas?
-
Thanks, I'll wait another day.
-
I know the Crawl Test reports are cached for about 48 hours, so there is a chance that the CSV will look identical to the previous one for that reason.
With that in mind, I'd recommend waiting another day or two before requesting a new Crawl Test, or just waiting until your next weekly campaign update if that is sooner.
-
I have fixed all the errors, but the CSV is still empty and says:
http://www.edilflagiello.it,2015-10-21T13:52:42Z,406 : Received 406 (Not Acceptable) error response for page.,Error attempting to request page
Here is the test result: http://www.webpagetest.org/result/151020_QW_JMP/1/details/
Any ideas? Thanks for your help.
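For what it's worth, the error line above is plain comma-separated text, so it can be split into fields programmatically. In this sketch, the column names are my own guess from this single row, not a documented Moz CSV layout:

```python
import csv
import io

# The Crawl Test error line quoted above, verbatim.
line = ('http://www.edilflagiello.it,2015-10-21T13:52:42Z,'
        '406 : Received 406 (Not Acceptable) error response for page.,'
        'Error attempting to request page')

# Assumed column layout, inferred from this one row only.
fields = ["url", "timestamp", "status_detail", "error_summary"]
row = dict(zip(fields, next(csv.reader(io.StringIO(line)))))

# The numeric status code sits at the front of the third column.
status_code = int(row["status_detail"].split(":", 1)[0].strip())
print(row["url"], status_code)  # prints the URL and 406
```

The useful takeaway is that the 406 is the response to the very first request for the homepage, so the crawl never gets past page one.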
-
Thanks a lot! I'm going to check these errors before the next crawl.
-
Great answer Dirk! Thanks for helping out!
Something else I noticed: the site came back with quite a few errors when I ran it through a third-party tool, the W3C Markup Validation Service, which was also validating the page as XHTML 1.0 Strict. That looks to be common in the other cases of 406 I've seen.
-
If you check your page with external tools, you'll see that the general status of the page is 200; however, there are different elements which generate 4xx errors (your logo generates a 408 error, and the same goes for the shopping cart). For more details, check http://www.webpagetest.org/result/151019_29_14E6/1/details/.
Remember that the Moz bot is quite sensitive to errors: while browsers, Googlebot, and Screaming Frog will accept errors on a page, the Moz bot stops in case of doubt.
You might want to check the 4xx errors and correct them; normally the Moz bot should be able to crawl your site once these errors are fixed. More info on 406 errors can be found here. If you have access to your log files, you can check in detail which elements are causing problems when Rogerbot visits your site.
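If you do get access to the server logs, a small filter like the following can pull out the Rogerbot requests that failed. This is a sketch assuming the common Apache/Nginx combined log format; the sample line is invented for illustration, and `rogerbot_errors` is a hypothetical helper name:

```python
import re

# Combined log format:
# ip - - [time] "METHOD path HTTP/x" status size "referer" "user-agent"
LOG_RE = re.compile(
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+ '
    r'"[^"]*" "(?P<agent>[^"]*)"'
)

def rogerbot_errors(log_lines):
    """Yield (status, path) for Rogerbot requests that got a 4xx/5xx response."""
    for line in log_lines:
        m = LOG_RE.search(line)
        if (m and "rogerbot" in m.group("agent").lower()
                and int(m.group("status")) >= 400):
            yield int(m.group("status")), m.group("path")

# Invented sample log line for illustration only:
sample = ('66.249.66.1 - - [21/Oct/2015:13:52:42 +0000] "GET /index.php HTTP/1.1" '
          '406 512 "-" "Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; '
          'http://www.seomoz.org/dp/rogerbot)"')
print(list(rogerbot_errors([sample])))  # [(406, '/index.php')]
```

Running this over the real access log for the time of a crawl would show exactly which URLs the server refuses when the Rogerbot user agent asks for them.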
Dirk