Crawl Errors Confusing Me
-
The SEOMoz crawl tool is telling me that I have a slew of crawl errors on the blog of one domain. All are related to MSNbot, and to trackbacks (which we do want to block, right?) and attachments (makes sense to block those, too). Any idea why these show up as crawl issues for MSNbot and not Google? My robots.txt is here: http://www.wevegotthekeys.com/robots.txt.
Thanks, MJ
-
I'm a little late to the party, but I want to summarize what I see as the answer.
1. The "Search Engine Blocked by Robots.txt" notice is only a warning, not an error. If you intend for these pages not to be crawled (and it does seem like you have a good reason for this), then there is nothing to worry about.
2. The reason the warning appears for MSNbot and not Google is that your robots.txt currently allows Google to crawl those files. As Daniel pointed out, you would need to add the identical Disallow directives under the Googlebot user-agent to block it the same way. Does that make sense? Or you could simply put all of these paths under the * directive so they apply to all robots.
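As an illustration, a consolidated file might look like the sketch below (hypothetical, not the site's actual file; only a few of the paths are shown). One caveat to be aware of: most crawlers obey only the most specific user-agent group that matches them, so if MSNbot keeps its own group for Crawl-delay, the Disallow lines have to be repeated inside that group too.

```
User-agent: *
Disallow: /key-west-blog/*?*
Disallow: /key-west-blog/*trackback
Disallow: /key-west-blog/*feed

# MSNbot matches this group and ignores the * group above,
# so the rules must be repeated here alongside Crawl-delay.
User-agent: Msnbot
Crawl-delay: 120
Disallow: /key-west-blog/*?*
Disallow: /key-west-blog/*trackback
Disallow: /key-west-blog/*feed
```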
-
Yes, I thought that's what you meant ... thanks!
-
I am seeing this:
User-agent: Googlebot
Noindex: /key-west-blog/*?*
Noindex: /key-west-blog/*.rss
Noindex: /key-west-blog/*feed
Noindex: /key-west-blog/*trackback
Noindex: /key-west-blog/*wp-
Noindex: /key-west-blog/tag/
Noindex: /key-west-blog/search/
Noindex: /key-west-blog/archives/
Noindex: /key-west-blog/category/
Noindex: /key-west-blog/2009
Noindex: /key-west-blog/2010
and this:
User-agent: Googlebot-Mobile
Noindex: /key-west-blog/?
Noindex: /key-west-blog/*.rss
Noindex: /key-west-blog/*feed
Noindex: /key-west-blog/*trackback
Noindex: /key-west-blog/*wp-
Noindex: /key-west-blog/tag/
Noindex: /key-west-blog/search/
Noindex: /key-west-blog/archives/
Noindex: /key-west-blog/category/
Noindex: /key-west-blog/2009
Noindex: /key-west-blog/2010
These use Noindex, which is a syntax I am unfamiliar with in robots.txt. Check out http://www.robotstxt.org/robotstxt.html for more info on robots.txt and proper syntax. I would change Noindex: to Disallow:, and that should fix the error in the robots.txt file.
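For reference, the corrected Googlebot block with Noindex swapped for Disallow would look like this (a sketch based on the paths quoted above):

```
User-agent: Googlebot
Disallow: /key-west-blog/*?*
Disallow: /key-west-blog/*.rss
Disallow: /key-west-blog/*feed
Disallow: /key-west-blog/*trackback
Disallow: /key-west-blog/*wp-
Disallow: /key-west-blog/tag/
Disallow: /key-west-blog/search/
Disallow: /key-west-blog/archives/
Disallow: /key-west-blog/category/
Disallow: /key-west-blog/2009
Disallow: /key-west-blog/2010
```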
-
The robots.txt file DOES contain
User-agent: Msnbot
Crawl-delay: 120
Disallow: /key-west-blog/*?*
Disallow: /key-west-blog/*.rss
Disallow: /key-west-blog/*feed
Disallow: /key-west-blog/*trackback
Disallow: /key-west-blog/*wp-
Disallow: /key-west-blog/*login.php
Disallow: /key-west-blog/tag/
Disallow: /key-west-blog/search/
Disallow: /key-west-blog/archives/
Disallow: /key-west-blog/category/
Disallow: /key-west-blog/2009
Disallow: /key-west-blog/2010
But you are saying I should remove the lines with Noindex?
-
In your robots.txt file, you have the Disallow: command under MSNbot and Noindex: under Googlebot. Noindex is not a robots.txt command. Change Noindex: to Disallow: and those pages will be blocked for those bots as well. Not sure if that is what is causing the issue, but it would explain the discrepancy. If you want to noindex a page, you do it with a meta tag like this:
<meta name="robots" content="noindex, follow">
You can change follow to nofollow if you want; it really doesn't matter much here.
-
I have the same problem. It looks like MSNbot is disallowed from accessing WordPress content, so pages show up as ?page=111. From what I understand so far, anything matching the pattern below is blocked for MSNbot. I don't have a definite answer for you as to what to do, but I can tell you will need to allow MSNbot the same way Googlebot is allowed.
Disallow: /key-west-blog/*?*
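As a rough way to check which URLs a wildcard rule like this catches, you can translate the pattern into a regex. This is a hand-rolled sketch of how MSNbot and Googlebot interpret * and $ (Python's built-in urllib.robotparser does not understand wildcards, so it cannot be used for this check):

```python
import re

def rule_to_regex(rule: str) -> "re.Pattern[str]":
    """Translate a robots.txt path rule into a regex.

    '*' matches any run of characters; a trailing '$' anchors
    the end of the URL path. Everything else is literal.
    """
    anchored = rule.endswith("$")
    if anchored:
        rule = rule[:-1]
    # Escape regex metacharacters, then restore '*' as '.*'
    pattern = re.escape(rule).replace(r"\*", ".*")
    return re.compile("^" + pattern + ("$" if anchored else ""))

def is_blocked(path: str, rule: str) -> bool:
    """True if the URL path matches the Disallow rule."""
    return rule_to_regex(rule).match(path) is not None

# The rule quoted above blocks any blog URL with a query string:
print(is_blocked("/key-west-blog/?page=111", "/key-west-blog/*?*"))   # True
print(is_blocked("/key-west-blog/some-post/", "/key-west-blog/*?*"))  # False
```

This makes it easy to see why ?page=111-style URLs disappear for MSNbot while plain post URLs are still crawled.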