Google: How to See URLs Blocked by Robots.txt?
-
Google Webmaster Tools says we have 17K out of 34K URLs that are blocked by our Robots.txt file.
How can I see the URLs that are being blocked?
Here's our Robots.txt file.
User-agent: *
Disallow: /swish.cgi
Disallow: /demo
Disallow: /reviews/review.php/new/
Disallow: /cgi-audiobooksonline/sb/order.cgi
Disallow: /cgi-audiobooksonline/sb/productsearch.cgi
Disallow: /cgi-audiobooksonline/sb/billing.cgi
Disallow: /cgi-audiobooksonline/sb/inv.cgi
Disallow: /cgi-audiobooksonline/sb/new_options.cgi
Disallow: /cgi-audiobooksonline/sb/registration.cgi
Disallow: /cgi-audiobooksonline/sb/tellfriend.cgi
Disallow: /*?gdftrk
-
It seems you might be asking two different questions here, Larry.
You ask which URLs are blocked by your robots.txt file, but you then answer your own question by listing its entries; those Disallow rules are exactly the URL patterns being blocked.
If in fact what you want to know is which pages exist on your website but are not currently indexed, that's a much bigger question and requires a lot more work to answer.
There is no way Webmaster Tools can give you that answer, because if it were aware of a URL it would already be indexing it.
HOWEVER! It is possible to do this if you are willing to do some of the work yourself, collecting and manipulating data with several tools. Essentially, you have to do it in three steps:
- Create a list of all the URLs that Google says are indexed. (This info comes from Google's SERPs.)
- Then create a separate list of all of the URLs that actually exist on your website. (This must come from a 3rd-party tool you run against your site yourself.)
- From there, use Excel to subtract the indexed URLs from the known URLs, leaving a list of non-indexed URLs, which is what you asked for. (If you'd rather script that subtraction than do it in Excel, see the sketch just below this list.)
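Here's a minimal Python sketch of that subtraction step, in case Excel gets unwieldy at 34K rows. It assumes you've saved each list to a plain text file with one URL per line; the filenames are just placeholders:

# diff_urls.py - list URLs that exist on the site but aren't in Google's index.
def load_urls(path):
    # Read a file of URLs, one per line, into a set normalized for
    # trailing slashes and case so the comparison isn't thrown off.
    with open(path) as f:
        return {line.strip().rstrip("/").lower() for line in f if line.strip()}

crawled = load_urls("crawled_urls.txt")   # from your own crawl (e.g. a Screaming Frog export)
indexed = load_urls("indexed_urls.txt")   # collected from Google's SERPs

not_indexed = sorted(crawled - indexed)   # set difference: known URLs minus indexed URLs
for url in not_indexed:
    print(url)
print(len(not_indexed), "of", len(crawled), "crawled URLs appear to be non-indexed")

The normalization (lowercasing, stripping trailing slashes) is a judgment call; adjust it to match how your URLs are actually written.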
I actually laid out this process step-by-step in response to an earlier question, so you can read the full walkthrough here: http://www.seomoz.org/q/how-to-determine-which-pages-are-not-indexed
Is that what you were looking for?
Paul
-
Okay, well the robots.txt will only exclude robots from the folders and URLs specified, and as I say, there's no way to download a list of all the URLs that Google is not indexing from Webmaster Tools.
If you have exact URLs in mind which you think might be getting excluded, you can test individual URLs in Google Webmaster Tools under:
Health > Blocked URLs
There you can specify the URLs and user-agents to test against.
Beyond this, if you want to know whether there are URLs in the folders you've specified that shouldn't be excluded, I would run a crawl of your website using SEOmoz's crawl test or Screaming Frog. Then sort the URLs alphabetically and make sure that all of the URLs in the folders you have excluded via robots.txt are ones you actually want to exclude. (If the list is too long to eyeball, see the sketch below for a scripted version of the same check.)
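To script that check, Python's standard-library robotparser can test each crawled URL against the live robots.txt. This is just a sketch, assuming your crawl is exported to a plain text file (placeholder filename below). One caveat: robotparser implements the original robots.txt spec, so it won't evaluate Google's wildcard extensions like Disallow: /*?gdftrk; test those separately in the Blocked URLs tool.

# check_crawl.py - flag crawled URLs that the live robots.txt blocks.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("http://www.audiobooksonline.com/robots.txt")
rp.read()  # fetch and parse the live robots.txt

with open("crawled_urls.txt") as f:  # your crawl export, one URL per line
    urls = [line.strip() for line in f if line.strip()]

# can_fetch() applies the "User-agent: *" rules here, since that's all this file has
blocked = [u for u in urls if not rp.can_fetch("Googlebot", u)]

print(len(blocked), "of", len(urls), "crawled URLs are blocked by robots.txt:")
for url in blocked:
    print(url)

Anything that shows up blocked but shouldn't be is a rule to loosen; anything that should be blocked but isn't flagged is a rule to add.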
-
I want to make sure that Google is indexing all of the pages we want it to, i.e., that all of the NOT-indexed URLs are valid exclusions.
-
Hi Larry
Just for my understanding, why do you want to find those URLs? Are you concerned that the robots.txt is blocking URLs it shouldn't be?
As for downloading a list of URLs which aren't indexed from Google Webmaster Tools, which is what I think you would really like, this isn't possible at the moment.
-
Liz: Perhaps my post was unclear or I am misunderstanding your answer.
I want to find out the specific URLs that Google says it isn't indexing because of our Robots.txt file.
-
If you want to see if Google has indexed individual pages which are supposed to be excluded, you can check the URLs in your robots.txt using the site: command.
E.g. type the following into Google:
site:http://www.audiobooksonline.com/swish.cgi
site:http://www.audiobooksonline.com/reviews/review.php/new/
...continue for all the URLs in your robots.txt. (A small sketch for generating these queries automatically follows below.)
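If typing a query for every Disallow line gets tedious, here's a small Python sketch that generates them from a saved copy of the robots.txt. The domain comes from this thread and the filename is a placeholder; wildcard rules like /*?gdftrk are skipped since they don't translate into a single site: query.

# make_site_queries.py - turn robots.txt Disallow lines into site: queries.
DOMAIN = "http://www.audiobooksonline.com"

with open("robots.txt") as f:  # a locally saved copy of the robots.txt
    for line in f:
        line = line.strip()
        if not line.lower().startswith("disallow:"):
            continue
        path = line.split(":", 1)[1].strip()
        if not path or "*" in path:
            # Wildcard patterns can't be expressed as a site: query;
            # check those by hand or in GWT's Blocked URLs tester.
            continue
        print("site:" + DOMAIN + path)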
Just from searching on the last example above (site:http://www.audiobooksonline.com/reviews/review.php/new/) I can see that you have results indexed. This is probably because the robots.txt rule was added after those pages were already indexed.
To get rid of these results:
- take the culprit line out of the robots.txt,
- add the robots meta tag (set to noindex) to all pages you want removed,
- submit a URL removal request via Webmaster Tools,
- check the pages have been noindexed,
- then add the line back into the robots.txt.
This is the tag:
<meta name="robots" content="noindex">
I hope that makes sense and is useful!