Error 406 with crawler test
-
Hi all. I have a big problem with the Moz crawler on this website: www.edilflagiello.it.
In July, with the old version of the site, I had no problems: the crawler gave me a CSV report with all the URLs. But after we switched to the new Magento theme and restyled the old version, every time I use the crawler I receive a CSV file with this error:
"error 406"
Can you help me understand what the problem is? I have already disabled .htaccess and robots.txt, but nothing changed.
The website is working well, and I have also crawled it with Screaming Frog without issues.
-
Thank you very much, Dirk. This Sunday I'll try to fix all the errors, and then I'll try again. Thanks for your assistance.
-
I noticed that you have a Vary: User-Agent header, so I tried visiting your site with JS disabled and the user agent switched to Rogerbot. Result: the site did not load (it spun endlessly), and checking the console showed quite a number of elements that generated 404s. In the end, there was a timeout.
Try Screaming Frog: set the user agent to Custom and change the values to
Name: Rogerbot
Agent: Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; http://www.seomoz.org/dp/rogerbot)
It will be unable to crawl your site. Check your server configuration; there are issues in how you handle the Rogerbot user agent.
Check the attached images.
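Dirk's user-agent test can also be reproduced outside Screaming Frog. Below is a minimal Python sketch using only the standard library; since the real site's server configuration isn't available here, it spins up a hypothetical local server that simulates the blocking behavior (returning 406 to the Rogerbot UA), then compares the status codes for a browser user agent versus the Rogerbot user agent. Point `check_status()` at your own URL to run the same comparison against a live server.

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

ROGERBOT_UA = ("Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; "
               "http://www.seomoz.org/dp/rogerbot)")
BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"

def check_status(url: str, user_agent: str) -> int:
    """Fetch url with the given User-Agent and return the HTTP status code."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code  # 4xx/5xx responses land here

class PickyHandler(BaseHTTPRequestHandler):
    """Hypothetical server that rejects the Moz crawler's user agent."""
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        if "rogerbot" in ua.lower():
            self.send_response(406)  # Not Acceptable, as in the CSV report
        else:
            self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), PickyHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

browser_status = check_status(url, BROWSER_UA)
rogerbot_status = check_status(url, ROGERBOT_UA)
print("browser :", browser_status)   # 200
print("rogerbot:", rogerbot_status)  # 406
server.shutdown()
```

If the live comparison shows a 406 only for the Rogerbot UA, the page itself is fine and the problem is server-side filtering of that user agent (for example, a security rule).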
Dirk
-
Nothing. After fixing all the 40x errors, the crawler report is still empty. Any other ideas?
-
Thanks, I'll wait another day.
-
I know the Crawl Test reports are cached for about 48 hours, so there is a chance that the CSV will look identical to the previous one for that reason.
With that in mind, I'd recommend waiting another day or two before requesting a new Crawl Test, or just waiting until your next weekly campaign update if that is sooner.
-
I have fixed all the errors, but the CSV is still empty and says:
http://www.edilflagiello.it,2015-10-21T13:52:42Z,406 : Received 406 (Not Acceptable) error response for page.,Error attempting to request page
Here is the WebPageTest result: http://www.webpagetest.org/result/151020_QW_JMP/1/details/
Any ideas? Thanks for your help.
-
Thanks a lot, guys! I'm going to check these errors before the next crawl.
-
Great answer Dirk! Thanks for helping out!
Something else I noticed: when I ran the site through a third-party tool, the W3C Markup Validation Service, it came back with quite a few errors. It was also validating the page as XHTML 1.0 Strict, which looks to be common in other cases of 406 I've seen.
-
If you check your page with external tools, you'll see that the general status of the page is 200; however, there are different elements which generate a 4xx error (your logo generates a 408 error, and the same goes for the shopping cart). For more details you could check this: http://www.webpagetest.org/result/151019_29_14E6/1/details/
Remember that the Moz bot is quite sensitive to errors: while browsers, Googlebot, and Screaming Frog will accept errors on a page, the Moz bot stops in case of doubt.
You might want to check the 4xx errors and correct them; normally the Moz bot should be able to crawl your site once these errors are corrected. More info on 406 errors can be found here. If you have access to your log files, you could check in detail which elements are causing problems when Mozbot visits your site.
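The "check your log files" step above can be sketched in Python. This assumes the common Apache/nginx combined log format, and the sample lines below are invented for illustration: the function pulls out every request whose user agent mentions rogerbot and reports any 4xx/5xx status, so you can see exactly which elements fail when Mozbot visits.

```python
import re

# Combined log format: ip - - [time] "METHOD path PROTO" status bytes "referer" "agent"
LOG_LINE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

def mozbot_errors(lines):
    """Yield (path, status) for rogerbot requests that got a 4xx/5xx response."""
    for line in lines:
        m = LOG_LINE.match(line)
        if not m:
            continue  # skip lines that don't match the expected format
        if "rogerbot" in m.group("agent").lower() and m.group("status")[0] in "45":
            yield m.group("path"), int(m.group("status"))

# Invented sample lines for illustration only:
sample = [
    '1.2.3.4 - - [21/Oct/2015:13:52:42 +0000] "GET / HTTP/1.1" 406 0 "-" '
    '"Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; http://www.seomoz.org/dp/rogerbot)"',
    '5.6.7.8 - - [21/Oct/2015:13:52:43 +0000] "GET /logo.png HTTP/1.1" 200 512 "-" '
    '"Mozilla/5.0 (Windows NT 10.0)"',
]
print(list(mozbot_errors(sample)))  # [('/', 406)]
```

Run this over your real access log (e.g. `mozbot_errors(open("/var/log/apache2/access.log"))`) to list every URL the Moz crawler was refused.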
Dirk