Timely use of robots.txt and meta noindex
-
Hi,
I have been checking every possible resource on content removal, but I am still unsure how to remove already indexed content.
When I use robots.txt alone, the URLs remain in the index, although no crawl budget is wasted on them. Still, having 100,000+ completely identical login pages sitting among the omitted results can't mean anything good.
When I use meta noindex alone, I keep my index clean, but I also keep Googlebot busy crawling these no-value pages.
When I use robots.txt and meta noindex together on existing content, I ask Google to please ignore my content, but at the same time I block it from ever crawling the noindex tag.
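This conflict can be demonstrated with Python's standard `urllib.robotparser`; the rules and the `/login` path below are only illustrative, mirroring the URLs mentioned later in this thread:

```python
from urllib import robotparser

# Illustrative rules: a robots.txt that blocks the login directory.
rules = """User-agent: *
Disallow: /login
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# A compliant crawler checks robots.txt before every fetch, so a blocked
# page is never downloaded -- and any noindex tag on it is never seen.
print(rp.can_fetch("Googlebot", "http://www.kozelben.hu/login?r=%2Fceg"))   # False
print(rp.can_fetch("Googlebot", "http://www.kozelben.hu/supplier/nearby"))  # True
```

Because `can_fetch` returns `False` for the login URL, a well-behaved bot never requests the page, which is exactly why the noindex tag on it can never take effect.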
Robots.txt combined with the URL removal tool is not a good solution either, as I have failed to remove whole directories this way. It seems only exact URLs can be removed like that.
I need a clear solution that solves both issues (indexing and crawling).
What I am trying to do now is the following:
I remove these directories (one at a time, to test the theory) from the robots.txt file, and at the same time I add the meta noindex tag to every page within the directory. The number of indexed pages should start decreasing (while useless crawling increases), and once it is low or zero, I would put the directory back into robots.txt and keep the noindex tag on all pages within it.
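As a concrete sketch of this two-phase plan (using a hypothetical /login/ directory; substitute your own paths):

```
# Phase 1: remove the Disallow rule for the directory from robots.txt,
# and serve this tag in the <head> of every page inside it:
#
#   <meta name="robots" content="noindex">
#
# Phase 2: once those pages have dropped out of the index, restore the
# rule so no further crawl budget is spent on them:
User-agent: *
Disallow: /login/
```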
Can this work the way I imagine it, or do you have a better way of doing it?
Thank you in advance for all your help.
-
Hi Deb,
Thank you for your reply.
I never thought Google would crawl the robots.txt file this rarely. I had actually read somewhere, and it makes complete sense, that before they start crawling they validate the process against robots.txt. It is only one page, but basically one of the most important ones.
This is a shocking discovery for me; thank you for drawing my attention to it. In any case, I have now submitted the page through 'Fetch as Google'.
Regarding your URL suggestion, I do not want them to return 404, at least not all of them. For example, I still need the login pages, and the reason we have individual URLs is that we would like visitors to return to the page they left before we asked them to log in. So status 200 is fine, because these pages exist for our customers, but the very same pages are totally useless for Google to crawl or index.
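Since these login URLs must keep returning 200 for visitors, one server-level option worth considering is the X-Robots-Tag HTTP header, which Google honours just like the meta tag. The snippet below is an illustrative sketch for nginx (the location path is hypothetical; Apache offers the equivalent via mod_headers):

```
# Illustrative nginx snippet: every response under /login carries a
# noindex header while still returning 200 OK to visitors.
location /login {
    add_header X-Robots-Tag "noindex, follow" always;
}
```

The same caveat applies as with the meta tag: the header only works if the path is not blocked in robots.txt, since Googlebot must fetch the response to see it.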
I hope this clarifies.
-
It seems the latest robots.txt file has not been cached by Google so far. This is what it currently has –
So, you need to use 'Fetch as Google' and submit this robots.txt file to the index to fix this issue as soon as possible.
What concerns me is that defunct URLs like http://www.kozelben.hu/login?r=%2Fceg%2Fdrink-island-bufe-whisky-bar-alkotas-utca-17-1123-budapest-126126%23addComment or http://www.kozelben.hu/supplier/nearby/supplierid/127493/type/geo are returning a 200 OK server response code when they should be returning 404. That would have stopped the problem once and for all.
However, assuming the CMS of your website does not offer any such option (in which case, it is a bad CMS), you need to apply the meta noindex tag to these pages and wait patiently for search engines to catch up with them.
*Can't you fix the 404 thing? Let us know.*
-
Really good article, indeed!
I have been thinking about the whole concept over the weekend, and I now have a further idea that is definitely worth considering.
Thank you again, Ryan.
-
Lindsay wrote a great article on the topic which I am sure you will enjoy: http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
-
Thank you for the further info, Ryan.
Although I see your point and accept that there is a lot of truth in it, looking at all our competitors and even the largest sites around the web, they still keep using robots.txt (even Google does).
I do, however, accept that noindex is a superior solution to robots.txt, and I will use it for all the content I do not want indexed.
I will then see whether, and how, I might still need to use robots.txt. I hope it does not hurt to have a noindexed page included in robots.txt (at a later time, once it is already out of the index).
-
I understand your concern Andras. The two questions I would focus on with respect to crawl budget:
1. Is all your content being indexed properly?
2. Is your content being indexed in a timely manner?
If the answer to the above two questions is yes, I would not spend any more time thinking about crawl budget. Either way, using the "noindex" meta tag is going to be the best way to handle the issue you originally presented.
On a related note, does the content on your "useful" pages change frequently? If so, ensure you are optimizing your links (both internal and external) to these pages. When you demonstrate these are important pages to your site, Google will crawl the pages more frequently.
-
Hi Ryan,
Thank you for your reply.
The only worry I have regarding crawl budget is that I currently have three times more indexed pages than useful pages, due to the issues I mentioned earlier.
It is true that I do not have daily content updates on all of my useful pages; however, I thought Google allocates each site an individual crawl budget based on the value it assigns to that site.
I just want this budget to be spent wisely, not causing my useful pages to be crawled less frequently because no-value (but noindexed) content is being crawled instead.
-
Hi Andras,
The first thing to know is a general rule: the best robots.txt file is a blank one. There is almost always a better method of managing a situation than using robots.txt. There are numerous reasons for this, one of which is that search engines do not always see the robots.txt file.
Regarding the noindex meta tag, that is the proper solution. I understand your concern over crawl budget, but I suggest that in this instance your concerns are not warranted. It is a waste of crawl budget to have search engines spend extra time due to slow servers, bad code, thin content, etc. If you have pages which should not be indexed, adding the noindex tag is likely the best solution.
Without being familiar with your site, it is not possible to offer a definitive answer, but generally speaking this response should be accurate. Keep in mind many sites have millions of pages, and Google has the ability to crawl the entire site each month.
-
Can you show us examples of URLs that are causing you trouble? That would be easier for us to provide a solution.