Spider Indexed Disallowed URLs
-
Hi there,
In order to reduce the huge amount of duplicate content and titles for a client, in August we disallowed all spiders from some areas of the site via the robots.txt file. This was followed by a huge decrease in errors in our SEOmoz crawl report, which, of course, made us satisfied.
In the meantime, we haven't changed anything in the back-end, the robots.txt file, FTP, the website or anything else. But our crawl report came in this November and all of a sudden all the errors were back. We've checked the errors and noticed URLs that are definitely disallowed. That these URLs are disallowed is also confirmed by Google Webmaster Tools, by other robots.txt checkers, and by searching for a disallowed URL in Google, which says the page is blocked for spiders. Where did these errors come from? Was it the SEOmoz spider that ignored our disallow rules, or something else? You can see the drop and the subsequent increase in errors in the attached image.
Thanks in advance.
LAAFj.jpg
-
This was what I was looking for! The pages are indexed by Google, yes, but they aren't being crawled by Googlebot (as my Webmaster Tools and the Matt Cutts video tell me). They are probably being crawled occasionally by rogerbot, though (not every month). Thank you very much!
-
Yes, canonicalization or a meta noindex tag would of course be better, to pass on the possible link juice, but we aren't worried about that. I was worried Google would still see the pages as duplicates (I couldn't really distill that from the article, although it was useful!). Barry Smith answered that last issue in the answer below, but I do want to thank you for your insight.
-
The directives issued in a robots.txt file are just a suggestion to bots, albeit one that Google does follow.
Malicious bots will ignore them, and occasionally even bots that follow the directives may mess up (probably what's happened here).
Google may also index pages that you've blocked if it has found them via a link, as explained here - http://www.youtube.com/watch?v=KBdEwpRQRD0 - or for an overview of what Google does with robots.txt files you can read here - http://support.google.com/webmasters/bin/answer.py?hl=en&answer=156449
I'd suggest you look at other ways of fixing the problem than just blocking 1,500 pages, but I see you've considered what would be required to fix the issues without removing the pages from the crawl and decided the value isn't there.
If WMT is telling you the pages are blocked from being crawled, I'd believe it.
Try searching Google for a URL that should be blocked and see if it's indexed, or do site:http://yoursitehere.com and see if blocked pages come up.
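To double-check what a conforming crawler should do with your rules, you can test them locally with Python's standard-library robots.txt parser. This is just a sketch: the `/private/` path and example.com URLs are made up for illustration, not taken from the site in question.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules mirroring the blanket "User-agent: *" disallow
# described in the thread; the /private/ path is made up.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A well-behaved crawler calls can_fetch() before requesting a URL.
print(parser.can_fetch("Googlebot", "http://example.com/private/page.html"))  # False
print(parser.can_fetch("rogerbot", "http://example.com/private/page.html"))   # False
print(parser.can_fetch("Googlebot", "http://example.com/index.html"))         # True
```

If this parser says a URL is blocked but a crawler still requested it, the crawler either ignored the file or, as suggested above, messed up on that run.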
-
The assumptions about what to expect from robots.txt may not be in line with reality. Crawling a page isn't the same thing as indexing its content to appear in SERPs, and even with a robots.txt in place, your pages can be crawled.
http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
-
Thanks, Mister Goyal. Of course we have been thinking about ways to do this and figured out some options, but implementing those solutions would be disastrous from a time/financial perspective. The pages we have blocked from the spiders aren't needed for visibility in the search engines and don't carry much link juice; they are only there for the visitors, so we decided we don't really need them for our SEO efforts in a positive way. But if these pages do get crawled and the engines notice the huge amount of duplicates, I reckon this would have a negative influence on our site as a whole.
So the problem we have is focused on our doubts about the legitimacy of the report. If SEOmoz can crawl these pages, Googlebot probably could too, right? After all, we've used: User-agent: *
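For reference, a blanket disallow like the one described would look something like this (the path is made up for illustration):

```text
User-agent: *
Disallow: /duplicate-area/
```

Because `User-agent: *` applies to every conforming crawler, rogerbot should honor it exactly as Googlebot does; any crawl of a matching URL would mean the crawler deviated from the file, not that the file failed to cover it.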
-
Mark
Are you blocking all bots from spidering these erroneous URLs? Is there a way for you to fix them so that they either don't exist or are no longer duplicates?
I'd recommend looking at it from that perspective as well, not just with the intent of making those errors disappear from the SEOmoz report.
I hope this helps.