Block all search results (dynamic) in robots.txt?
-
I know that Google does not want to index "search result" pages for a lot of reasons (dup content, dynamic URLs, blah blah). I recently optimized the entire IA of my sites to have search-friendly URLs, which includes search result pages. So, my search result pages changed from:
- /search?12345&productblue=true&id789
to
- /product/search/blue_widgets/womens/large
As a result, Google started indexing these pages thinking they were static (no opposition from me :)), but I started getting WMT messages saying they are finding a "high number of urls being indexed" on these sites. Should I just block them altogether, or let it work itself out?
-
You can block the URLs that contain "/product/search/". It can be done easily by adding the following to robots.txt:
User-agent: *
Disallow: /product/search/
Hope this helps...
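If you want to sanity-check a rule like this before deploying it, Python's standard-library robots.txt parser can evaluate it locally. A minimal sketch (the domain and URLs are just placeholders based on this thread's examples):

```python
from urllib import robotparser

# The proposed robots.txt rule from above.
rules = """User-agent: *
Disallow: /product/search/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Search-result URLs under /product/search/ are blocked for all crawlers,
# while normal product/category URLs remain crawlable.
print(rp.can_fetch("Googlebot", "https://example.com/product/search/blue_widgets/womens/large"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/product/blue-widgets"))  # True
```

Note the trailing slash in the Disallow line: it scopes the rule to the /product/search/ directory without touching sibling paths like /product/searchable-widget.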
-
As you said: "The increasing number of pages indexed will dilute the link juice of the entire site."
Can you give more examples? Or just a tip on where to search for this kind of information?
Thank you.
-
I would agree with BK Search. You want to minimize what Google has to crawl (I know this sounds backwards) so that Google focuses on the pages that you want to rank.
Long term, why would you waste Googlebot's time on pages that don't matter as much? What if you had an update on a more important page and Googlebot was too busy crawling this near-infinite set of search pages?
At this point, I would use the noindex meta tag rather than robots.txt, so that Google can still crawl the pages and remove the URLs from the index. Then you can add the path to robots.txt later so crawling stops. Otherwise you may end up with a lot of junk in the index.
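A rough sketch of that order of operations, using this thread's example path. First, serve a robots meta tag on the search-result pages while they are being de-indexed:

```html
<!-- Served in the <head> of /product/search/... pages during de-indexing -->
<meta name="robots" content="noindex, follow">
```

Only after the pages have dropped out of the index would you add the Disallow rule to robots.txt. Doing it in the other order backfires: a robots.txt block stops Googlebot from fetching the pages at all, so it never sees the noindex tag, and already-indexed URLs can linger in the index.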
-
I may differ a little from some of these answers, but I would recommend that you exclude these pages from getting indexed.
The reasons I would do that:
- You know it is largely duplicate content that resolves to the same pages as your categories.
- Google has stated that they would prefer not to have it indexed.
- The increasing number of pages indexed will dilute the link juice of the entire site.
- There is also the possibility that people searching from their browser's URL bar will inflate the number of indexed pages considerably.
- A competitor could create thousands of links to these pages and create a huge footprint that is nothing but search pages.
- And finally, I like having product pages ranking highly if at all possible.
I would do this with both the robots.txt file and the GWMT URL exclusion on the /product/search/ directory.
Good Luck!
-
Hi! We're going through some of the older unanswered questions and seeing if people still have questions or if they've gone ahead and implemented something and have any lessons to share with us. Can you give an update, or mark your question as answered?
Thanks!
-
As a follow-up with further info: it's been about 5 months since the change. I do get some traffic from these indexed pages (not a ton, but enough that I would like to not block them if there is no negative impact). The search engines' reaction seems to be confusion: they index the content, but also recognize that something may not be right. So I am wondering if anyone else has done something similar or is trying this.
Admittedly, this is what I wanted the new URL structure to do, as an experiment. Just looking for anyone else who has done or is doing something similar.