Crawl Errors Confusing Me
-
The SEOMoz crawl tool is telling me that I have a slew of crawl errors on the blog of one domain. All are related to MSNbot, and to trackbacks (which we do want to block, right?) and attachments (makes sense to block those, too). Any idea why these show up as crawl issues for MSNbot and not Google? My robots.txt is here: http://www.wevegotthekeys.com/robots.txt.
Thanks, MJ
-
I'm a little late to the party, but I want to summarize what I see as the answer.
1. The "Search Engine Blocked by Robots.txt" notice is only a warning, not an error. If you intend for these pages not to get crawled (and it does seem like you have a good reason for this), then there is nothing to worry about.
2. The reason the warning appears for MSNbot and not Google is that your robots.txt currently allows Google to crawl those files. As Daniel pointed out, you would need to add the identical Disallow directives under the Googlebot user-agent to block Google as well. Does that make sense? Or you could just list all of these paths under the User-agent: * section so they apply to all robots.
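For example, a consolidated section along these lines (a sketch using a few of the paths from your file; adjust the list to whatever you actually want blocked) would apply the same rules to every crawler:

```text
# Applies to all crawlers, including Googlebot and MSNbot
User-agent: *
Disallow: /key-west-blog/*?*
Disallow: /key-west-blog/*feed
Disallow: /key-west-blog/*trackback
```

Note that wildcard support in Disallow paths is an extension honored by the major search engines, not part of the original robots.txt standard.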
-
Yes, I thought that's what you meant ... thanks!
-
I am saying this:
User-agent: Googlebot
Noindex: /key-west-blog/*?*
Noindex: /key-west-blog/*.rss
Noindex: /key-west-blog/*feed
Noindex: /key-west-blog/*trackback
Noindex: /key-west-blog/*wp-
Noindex: /key-west-blog/tag/
Noindex: /key-west-blog/search/
Noindex: /key-west-blog/archives/
Noindex: /key-west-blog/category/
Noindex: /key-west-blog/2009
Noindex: /key-west-blog/2010
and this:
User-agent: Googlebot-Mobile
Noindex: /key-west-blog/?
Noindex: /key-west-blog/*.rss
Noindex: /key-west-blog/*feed
Noindex: /key-west-blog/*trackback
Noindex: /key-west-blog/*wp-
Noindex: /key-west-blog/tag/
Noindex: /key-west-blog/search/
Noindex: /key-west-blog/archives/
Noindex: /key-west-blog/category/
Noindex: /key-west-blog/2009
Noindex: /key-west-blog/2010
They use Noindex, which is a syntax I am unfamiliar with in robots.txt. You can check out http://www.robotstxt.org/robotstxt.html for more info on robots.txt and proper syntax. I would change Noindex: to Disallow:, and that should fix the error in the robots.txt file.
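For reference, here is roughly what the Googlebot section would look like with that change applied (same paths, just swapping the directive; remove any paths you actually want crawled):

```text
User-agent: Googlebot
Disallow: /key-west-blog/*?*
Disallow: /key-west-blog/*.rss
Disallow: /key-west-blog/*feed
Disallow: /key-west-blog/*trackback
Disallow: /key-west-blog/*wp-
Disallow: /key-west-blog/tag/
Disallow: /key-west-blog/search/
Disallow: /key-west-blog/archives/
Disallow: /key-west-blog/category/
Disallow: /key-west-blog/2009
Disallow: /key-west-blog/2010
```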
-
The robots.txt file DOES contain
User-agent: Msnbot
Crawl-delay: 120
Disallow: /key-west-blog/*?*
Disallow: /key-west-blog/*.rss
Disallow: /key-west-blog/*feed
Disallow: /key-west-blog/*trackback
Disallow: /key-west-blog/*wp-
Disallow: /key-west-blog/*login.php
Disallow: /key-west-blog/tag/
Disallow: /key-west-blog/search/
Disallow: /key-west-blog/archives/
Disallow: /key-west-blog/category/
Disallow: /key-west-blog/2009
Disallow: /key-west-blog/2010
But you are saying I should remove the lines with Noindex?
-
In your robots.txt file, you have the Disallow: directive under MSNbot and Noindex: under Googlebot. Noindex is not a robots.txt directive. Change Noindex: to Disallow: and those pages will be blocked for Googlebot as well. I'm not sure that is what is causing the issue, but it would explain the discrepancy. If you want to noindex a page, you do it with a meta tag in the page's head, like this:
<meta name="robots" content="noindex, follow">
You can change follow to nofollow if you want; it really doesn't matter much.
-
I have the same problem. It looks like MSNbot is disallowed from accessing WordPress content, so pages that show up as ?page=111 are blocked; from what I understand so far, anything matching the pattern below is blocked for MSNbot. I don't have a definite answer for you as to what to do, but I can tell you that you will need to allow MSNbot the same way Googlebot is allowed.
Disallow: /key-west-blog/*?*
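If you want to sanity-check which URLs a pattern like this blocks, you can approximate the wildcard matching in a few lines of Python. This is a sketch: Python's built-in urllib.robotparser follows the original robots.txt standard and does not interpret * wildcards, so it converts the pattern to a regex instead; real crawlers may differ in edge cases.

```python
import re

def pattern_to_regex(pattern):
    # Convert a robots.txt path pattern ('*' wildcard, '$' end anchor) to a regex.
    regex = re.escape(pattern).replace(r"\*", ".*")
    if regex.endswith(r"\$"):
        regex = regex[:-2] + "$"
    return re.compile(regex)

def is_blocked(path, disallow_patterns):
    # A path is blocked if any Disallow pattern matches its beginning.
    return any(pattern_to_regex(p).match(path) for p in disallow_patterns)

patterns = ["/key-west-blog/*?*"]
print(is_blocked("/key-west-blog/?page=111", patterns))   # True
print(is_blocked("/key-west-blog/post-title/", patterns)) # False
```

Running it against the patterns in your file shows that any URL with a query string under /key-west-blog/ is blocked, which matches the ?page=111 behavior described above.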