Why is blocking the SEOmoz crawler considered a red "error"?
-
I think it's because that section is labeled "crawl errors", so an area blocked from crawling gets counted as an error. I can see where you're coming from, but think of it as an error encountered while attempting to crawl, not necessarily an error in the site itself.
-
So:
For 4xx errors, read this article: http://webdesign.about.com/cs/http/p/http4xx.htm
As for "SEOmoz crawler blocked by robots.txt": in that file you have added two URLs, so you are blocking the search engine robots from crawling/indexing those pages (a sketch of what that can look like is shown below).
For more about this error, please read: http://www.google.com/support/webmasters/bin/answer.py?answer=156449
Hope this helps,
thanks
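(A minimal sketch of what such a robots.txt block might look like; the directory and file names here are placeholders, not the asker's actual paths:)
User-agent: *
Disallow: /defunct-directory/page-one.html
Disallow: /defunct-directory/page-two.html
rogerbot, the SEOmoz crawler, honors these rules just as Googlebot does, which is why any URL matched by them shows up under "blocked by robots.txt" in the crawl report.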
-
It seems to me that it should be a "Notice" not an "Error." I am intentionally blocking bots from a defunct directory. Keeping SEOmoz out of an old directory should not (does not?) affect SEO, you know?
-
Sorry about that. I uploaded it 3 times and finally noticed the "Update" button after uploading on the 3rd attempt.
-
Hi, I can't see the attached image. Upload it to ImageShack or a similar host, share the URL here, and I will try to help you.
If the SEOmoz bot finds errors while crawling, it means your site has failures in its programming; it fails "search engine friendly" optimisation.
Send me the image and I will try to help you.
-
Where's the attached image? It's only an error because then they can't crawl and build data, but that's just a guess.
Related Questions
-
Functionality of SEOmoz crawl page reports
I am trying to find a way to ask SEOmoz staff to answer this question because I think it is a functionality question, so I checked the SEOmoz Pro resources. I have also had no responses to it in the forum, so here it is again. Thanks for your consideration! Is it possible to configure the SEOmoz Rogerbot error-finding bot (the one that generates the crawl diagnostics reports) to obey the instructions in individual page headers and in the http://client.com/robots.txt file? For example, there is a page at http://truthbook.com/quotes/index.cfm?month=5&day=14&year=2007 that has <meta name="robots" content="noindex"> in the header. This themed Quote of the Day page is intentionally duplicated at http://truthbook.com/quotes/index.cfm?month=5&day=14&year=2004 and at http://truthbook.com/quotes/index.cfm?month=5&day=14&year=2010, but all of them carry <meta name="robots" content="noindex">, so Google should not see them as duplicates, right? Google does not in Webmaster Tools. So it should not be counted three times, but it seems to be. How do we generate a report of the actual pages flagged as duplicates so we can check? We do not believe Google sees it as a duplicate page, but Roger appears to. Similarly, there is http://truthbook.com/contemplative_prayer/, where http://truthbook.com/robots.txt also tells Google to stay clear. Yet we are showing thousands of duplicate page content errors, while Google Webmaster Tools shows only a few hundred for pages configured as described. Anyone? Jim
Moz Pro | | jimmyzig
-
404: Error - MBP Ninja Affiliate
Hello, I use the MBP Ninja Affiliate plugin to redirect links. I ran Crawl Diagnostics and a 404 error appears, but the link is working; it exists. Why does Crawl Diagnostics show a 404 error?
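(As a quick check, assuming a hypothetical affiliate URL, you can see the raw status code and redirect chain the URL returns with something like:)
# HEAD request that follows redirects and prints each status line (URL is a placeholder)
curl -IL http://www.example.com/recommends/some-product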
Moz Pro | | antoniojunior
-
SEOmoz duplicate content checker
From my reports in SEOmoz I can see pages flagged as having duplicate content, but when I click on them it does not show me which pages are carrying the duplicate content. Is there any way to check this via SEOmoz reports?
Moz Pro | | jazavide
-
SEOmoz tool Issue?
Hi Mozzers, I am doing a web maintenance task for a client, and for weeks Moz has been detecting 49 duplicate pages (the contact page). I thought I had resolved the issue by creating the XML sitemap and excluding those duplicates, but the Moz tool still detected them, so I ran a search on some of these duplicates to check whether they were indexed; none of them were. So my question is: has anyone recently experienced similar issues? Is the Moz tool not 100% accurate? Thanks for sharing your thoughts and answers.
Moz Pro | | Ideas-Money-Art
-
SEOMOZ Crawl Test
Guys, I really have an issue that I know is there but cannot see, if that makes sense. Basically, three months ago I did a site-wide 301 from economyleasinguk.co.uk to www.economy-car-leasing.co.uk. Everything looks good: I get all the correct header responses, all canonicals work perfectly, Google Webmaster Tools is updated, and Fetch as Googlebot shows the old site is 301'd. I tried the SEOmoz crawl test today on the old domain and got this message: "Oh no! Looks like the page you were trying to access is temporarily down." At first I thought that made sense, because the site is not there any more it won't crawl an old 301'd domain. However, I tried it on another domain I know has just been 301'd and got this message instead:
"The URL http://www.site1.com/ redirects to http://site2.com/. Do you want to crawl http://site2.com/ instead?
Would you like to:
Continue with www.site1.com
Continue with site2.com"
I really do not know what to do. Either the redirect script is missing something (though it is doing what it should) or the server is the problem (but again, it is doing what it should), so why would SEOmoz not be able to crawl the old URL like the example site above? The strange thing is that Open Site Explorer does see the 301 and asks if I want to check the new URL instead. PS: the redirect is done with a PHP redirect, which I am asking him to change to .htaccess as the site is now on an Apache server, and I was wondering if this could be an issue; all pages go to the correct pages as requested. Thanks in advance.
Moz Pro | | kellymandingo
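(For reference, a minimal sketch of a site-wide 301 in .htaccess, assuming mod_rewrite is enabled on the Apache server; the domain names are taken from the question above:)
# Redirect every URL on the old domain to the same path on the new domain
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?economyleasinguk\.co\.uk$ [NC]
RewriteRule ^(.*)$ http://www.economy-car-leasing.co.uk/$1 [R=301,L]
-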
Can I calculate "Keyword Difficulty" metric using Mozscape API data?
We already have a web application that pulls certain metrics about websites using the Mozscape API, but we want to extend it so users can pull "Keyword Difficulty" metrics in bulk instead of one at a time (or five at a time). I wouldn't mind the five-at-a-time limitation if we could just automate the API calls and let the tool pull data for 50 or so keywords without user interaction. I know it's a "formula", but I don't know what SEOmoz uses for its formula. Has anyone figured out a way to calculate this based on the Mozscape API data? Has anyone ever tried to reverse engineer this metric?
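(For what it's worth, here is a minimal Python sketch of one way to approximate a difficulty-style score in bulk; it is not SEOmoz's actual formula, and fetch_page_authority is a hypothetical placeholder you would implement against the Mozscape url-metrics endpoint with your own credentials:)
# Rough, unofficial approximation: average the Page Authority of the pages
# currently ranking in the top 10 for each keyword. NOT the SEOmoz formula.

def fetch_page_authority(url):
    # Placeholder: call the Mozscape url-metrics endpoint for `url` with your
    # Access ID / secret key and return its Page Authority (0-100).
    raise NotImplementedError

def keyword_difficulty(top_ranking_urls):
    # top_ranking_urls: the URLs ranking in the top 10 for a single keyword.
    scores = [fetch_page_authority(u) for u in top_ranking_urls]
    return sum(scores) / len(scores) if scores else 0.0

# Batch over many keywords without manual interaction:
# difficulties = {kw: keyword_difficulty(urls) for kw, urls in serps.items()}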
Moz Pro | | brchap
-
Why does it take so long for SeoMoz to update data?
I changed the anchor text on four 40/100 MozRank sites two months ago, yet SEOmoz still shows the old anchor text in the reports. Why is this taking so long? I also notice my inbound domains haven't increased, nor has my MozRank, in 3-4 weeks. What's the turnaround?
Moz Pro | | sanchez1960
-
Site Explorer reporting an error for over a week
Unable to display anchor text error: "Doh! Roger is still working out the kinks with the new index and is having issues untangling anchor text data. We're currently showing anchor text data from the previous index, but we will update as soon as we can."
Moz Pro | | 1step2heaven120