Tracking a crawl error
-
Hi All,
If you find a crawl error reported for your site, how do you track it down?
The report only gives the URL that is broken, not the location where the link to it appears. Can I drill down and find more information?
Thank you!
-
Hi Martin,
Thanks for coming back to me.
You are spot on... I think I have been sitting at the desk too long today. Zoned out!
Yep, Webmaster Tools in Google shows the URL in full, so I can find it now.
Thanks for your help
-
Hi Wayne, just to clarify: you are having issues with a crawl error in Webmaster Tools, and when you click the URL it works fine? You can use Google Webmaster Tools > Fetch as Googlebot and input the URL to see the HTML as Googlebot sees it. You can also download a text-based browser such as Lynx to view your website much the way Googlebot sees it.
It will be very hard to determine a crawl path; if that information were available, we could develop trends and get a little closer to their algorithm. Crawler trackers can, on occasion, hinder the crawler. Hope this helps.
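If you want to script the same kind of check instead of using Fetch as Googlebot, here is a rough Python sketch. The URL is a placeholder; the user-agent string is Googlebot's published token, though sending it only shows how the server responds to that header — it doesn't make the request come from Google.

```python
# A minimal sketch: fetch a page with a crawler-style user agent,
# using only the standard library. The URL is a placeholder.
import urllib.request
import urllib.error

url = "http://www.example.com/"  # placeholder: the URL from the crawl error report
ua = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

req = urllib.request.Request(url, headers={"User-Agent": ua})
try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(resp.status, resp.reason)                           # status a crawler would get
        print(resp.read(300).decode("utf-8", errors="replace"))   # first bytes of the raw HTML
except urllib.error.HTTPError as e:
    print("Fetch failed:", e.code, e.reason)                      # e.g. the 404 behind the crawl error
```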
-
Related Questions
-
Any crawl issues with TLS 1.3?
Not a techie here... maybe this is to be expected, but ever since one of my client sites switched to TLS 1.3, I've had a couple of crawl issues and other hiccups. First, I noticed that I can't use HTTPSTATUS.io any more... it returns an error message for URLs on the site in question. I wrote to their support desk and they said they haven't updated to 1.3 yet. Bummer, because I loved httpstatus.io's functionality, especially getting bulk reports. Also, my Moz campaign crawls were failing. We are setting up a robots.txt directive to allow rogerbot (and the other bot) and will see if that works. These failures are consistent with the date we switched to 1.3, and some testing confirmed it. Is anyone else seeing these types of issues, and can you suggest any workarounds, solves, or hacks to make my life easier? (Including an alternative to httpstatus.io... I have and use Screaming Frog... not as slick, I'm afraid!) Do you think there was a configuration error with the client's TLS 1.3 upgrade, or maybe they're using a problematic/older version of 1.3? Thanks
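A possible stopgap for the bulk status checks while httpstatus.io catches up: a short Python sketch using the third-party requests library. The URLs are placeholders, and TLS 1.3 support depends on the underlying Python/OpenSSL build being reasonably current.

```python
# A rough stand-in for httpstatus.io-style bulk checks, not a drop-in
# replacement. Requires: pip install requests
import requests

urls = [
    "https://example.com/",        # placeholder URLs: swap in the client's pages
    "https://example.com/page-2",
]

for url in urls:
    try:
        r = requests.head(url, allow_redirects=True, timeout=10)
        hops = " -> ".join(str(h.status_code) for h in r.history) or "none"
        print(f"{url}: final {r.status_code} (redirect hops: {hops})")
    except requests.RequestException as e:
        print(f"{url}: FAILED ({e})")
```

Some servers mishandle HEAD requests; swapping in requests.get is the usual fallback.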
Technical SEO | TimDickey
-
Why does Bing bot crawl so aggressively?
We observe that the Bing bot is crawling our site very aggressively. We set Bing's crawl control so that it should not crawl us during heavy-traffic hours, but that did not change a thing. Does anyone else have this problem, and even better, a solution?
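One workaround to try when the crawl-control setting is ignored: Bing documents support for the Crawl-delay directive in robots.txt. A minimal sketch (the 10-second value is arbitrary):

```
# Hypothetical robots.txt sketch: throttle Bingbot specifically.
# Crawl-delay is the number of seconds the bot should wait between requests.
User-agent: bingbot
Crawl-delay: 10
```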
Technical SEO | Roverandom
-
Error in how URLs were set up, how can it be fixed?
Hi, I managed a website port to a WP responsive design for a client, see http://chicagotelephony.com. Unfortunately, he wanted me to work with a graphic designer rather than a web geek, so the resulting website has messed-up URLs, i.e. index.php is smack in the middle of almost all the pages. I know that is all wrong, but I also realized that she was not fluent in the way the Genesis framework was set up or how the particular template I selected operated. So I just wanted to get it out there... and now it is live, but has all these errors. Do I have to do 301 redirects? Is there a setting or a button inside the WP template that would produce correct slugs and get rid of the index.php within the URL? For example, http://chicagotelephony.com/index.php/cloud-based-solutions/ and http://chicagotelephony.com/index.php/var-network-value-added-reseller/ should be chicagotelephony.com/cloud-based-solutions/ and chicagotelephony.com/var-network-value-added-reseller/ and so forth.
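In WordPress, the usual first step is Settings > Permalinks: choosing a structure without index.php regenerates the rewrite rules. If the old /index.php/ URLs have already been indexed or linked, a hedged .htaccess sketch (assuming Apache with mod_rewrite, placed above WordPress's own rewrite block) could 301 them to the clean versions:

```
# Hypothetical sketch: 301-redirect /index.php/anything to /anything.
RewriteEngine On
RewriteRule ^index\.php/(.*)$ /$1 [R=301,L]
```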
Technical SEO | DianeDP
-
Duplicate Page Errors
Hey guys, I'm wondering if anyone can help... Here is my issue. Our website:
http://www.cryopak.com
It's built on the Concrete5 CMS. I'm noticing a ton of duplicate page errors (9,530 to be exact). I'm looking at the issues and it looks like they are being caused by the CMS. For instance, the home page seems to be duplicating: http://www.cryopak.com/en/
http://www.cryopak.com/en/?DepartmentId=67
http://www.cryopak.com/en/?DepartmentId=25
http://www.cryopak.com/en/?DepartmentId=4
http://www.cryopak.com/en/?DepartmentId=66
Do you think this is an issue? Is there any way to fix it? It seems to be happening on every page. Thanks, Jim
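One common fix for query-string duplicates like these is a rel=canonical tag on every ?DepartmentId variant pointing back at the clean URL. A minimal sketch (how you inject it into the head section depends on your Concrete5 theme):

```html
<!-- Hypothetical sketch: in the <head> of /en/ and each /en/?DepartmentId=... variant -->
<link rel="canonical" href="http://www.cryopak.com/en/" />
```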
Technical SEO | TCPReliable
-
Crawl Diagnostics Report 500 error
How can I find out what is causing my website to return 500 errors, and how do I locate and fix them?
Technical SEO | Joseph-Green-SEO
-
Duplicate title tag error
Hi all, I am new to SEO, and we have just launched a new version of our site (we kept the domain name the same, though). I keep getting errors for duplicate title tags, e.g. www.sandafayre.com/default.aspx and www.sandafayre.com/Default.aspx, www.sandafayre.com/StampAuctions.aspx and www.sandafayre.com/stampauctions.aspx (plus loads of others :o). The only difference each time seems to be the capitalisation of the first character, but I thought URLs were not case sensitive? I've been advised to add the rel canonical tag to one of the pages, but the problem is I really only have one version of each page! Can anybody help please? Many thanks in advance! Nikki
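For what it's worth, URL paths are case sensitive as far as crawlers are concerned, even when the web server happens to serve both spellings. Since the site runs .aspx pages, here is a hedged sketch for IIS (assuming the URL Rewrite module is installed) that 301s any URL containing uppercase letters to its lowercase form:

```xml
<!-- Hypothetical web.config fragment (inside <system.webServer>):
     redirect uppercase URLs to their lowercase equivalents. -->
<rewrite>
  <rules>
    <rule name="ToLowercase" stopProcessing="true">
      <match url=".*[A-Z].*" ignoreCase="false" />
      <action type="Redirect" url="{ToLower:{R:0}}" redirectType="Permanent" />
    </rule>
  </rules>
</rewrite>
```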
Technical SEO | Stampy78
-
Is there a reason to set a crawl-delay in the robots.txt?
I've recently encountered a site that has a crawl-delay directive set in its robots.txt file. I've never seen a need for this, since you can control Googlebot's crawl rate in Google Webmaster Tools. They have the directive set for all crawlers, which seems odd to me. What are some reasons someone would want to set it like that? I can't find any good information on it when researching.
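For reference, the pattern in question looks like the sketch below (the delay value is arbitrary). One reason to set it this way is to throttle the many lesser bots that honor Crawl-delay; Googlebot ignores the directive, and its crawl rate is controlled in Webmaster Tools instead.

```
# Hypothetical robots.txt sketch: one delay applied to every crawler.
# Googlebot ignores Crawl-delay; many other bots honor it.
User-agent: *
Crawl-delay: 10
```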
Technical SEO | MichaelWeisbaum
-
How to handle "Not found" crawl errors?
I'm using Google Webmaster Tools and can see the "Not found" crawl errors. I have set up a custom 404 page for all broken links; you can see it at http://www.vistastores.com/404. But I have a question about it: do I need to set up 301 redirects for the broken links found in Google Webmaster Tools?
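Generally, a 301 is only worth setting up where a broken URL has inbound links or traffic and a sensible live replacement exists; the rest can safely return the custom 404. Where a redirect does make sense, a hedged Apache sketch (both paths are made-up examples):

```
# Hypothetical sketch: permanently redirect a removed page to its
# closest live replacement.
Redirect 301 /old-patio-umbrella-page http://www.vistastores.com/patio-umbrellas
```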
Technical SEO | CommercePundit