Should I noindex the site search page? It is generating 4% of my organic traffic.
-
I've read several recommendations to noindex site search result URLs.
Checking analytics, I found that site search URLs generated about 4% of my total organic search traffic (but less than 2% of sales). My reasoning is that site search results may create duplicate content issues and may prevent the more relevant product or category pages from ranking instead.
Would you noindex this page or not?
Any thoughts?
-
One other thing to think about - do you have another method for the bots to find/crawl your content?
We block all of our /search result pages in robots.txt - I agree with Everett's post that they are thin content and ripe for duplication issues.
We list all content pages in sitemap.xml and have a single section to "browse content" that is paginated. We use rel="next" and "prev" to help the bots walk through each page.
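For illustration, the head of page 2 in a paginated section like that might look something like this (the URLs and titles here are made up for the example, not from our actual site):

```html
<!-- Hypothetical <head> of page 2 of a paginated "browse content" section. -->
<!-- rel="prev"/"next" tell the bots how the pages chain together.          -->
<head>
  <title>Browse Content - Page 2 | Example Store</title>
  <meta name="description" content="Browse all of our content. Page 2.">
  <link rel="prev" href="https://www.example.com/browse-content?page=1">
  <link rel="next" href="https://www.example.com/browse-content?page=3">
</head>
```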
References
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1663744
Personally, I think Maile's video is really great and you get to see some of the cool artwork in her house.
http://googlewebmastercentral.blogspot.com/2012/03/video-about-pagination-with-relnext-and.html
Important to note: if you do set up pagination and then add any other filters or sort options within it, nofollow those links and noindex those result pages - you want only one route through your pagination for Google to travel. Also, make sure each page has a unique title and description; I just add "Page N" to the standard blurb for each page and that usually takes care of it.
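As a rough sketch of what I mean for those filter/sort variants (the URL pattern and parameter names are just examples):

```html
<!-- Hypothetical sorted variant of a paginated page: /browse?page=2&sort=price -->
<!-- The variant page itself is noindexed, and the link pointing to it is       -->
<!-- nofollowed, so the bots have only one route through the pagination.        -->
<head>
  <title>Browse Content - Page 2, sorted by price | Example Store</title>
  <meta name="robots" content="noindex">
</head>
<body>
  <!-- On the normal paginated page, the sort link carries rel="nofollow": -->
  <a href="/browse?page=2&amp;sort=price" rel="nofollow">Sort by price</a>
</body>
```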
If you close one door on your search pages, you can open another one using pagination!
Cheers!
-
Since numerous search results pages are already in the index, then yes, you want to use the noindex tag rather than a robots.txt disallow. The noindex tag will gradually lead to the pages being removed from the SERPs and from the cache.
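A minimal example of the tag, assuming your search results live under something like /search (adjust for your own URL pattern):

```html
<!-- Hypothetical /search results page. "noindex" drops it from the index  -->
<!-- over time; "follow" still lets bots crawl through to the products it  -->
<!-- lists. Swap in "nofollow" if you don't want those links followed.     -->
<meta name="robots" content="noindex, follow">
```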
-
Mike, Everett,
Thanks a lot. Will go ahead and noindex. Our navigation path is easy to crawl.
So do I add noindex, nofollow in a meta tag or an X-Robots-Tag? We have thousands of site search pages already in the Google index, so I understand the X-Robots-Tag or a meta tag is preferred to using robots.txt, right?
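In case it helps anyone else reading, here's the sort of X-Robots-Tag setup I mean - the Apache snippet and the /search path are only illustrative, not our actual config:

```apache
# Hypothetical Apache config (mod_headers): sends the header for every URL
# starting with /search, so each search results page carries noindex
# without any template changes.
<LocationMatch "^/search">
  Header set X-Robots-Tag "noindex"
</LocationMatch>
```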
-
This was covered by Matt Cutts in a blog post way back in 2007, but the advice is still the same, as Mike has pointed out. Search results pages could be considered thin content and not particularly useful to users, so you can understand why Google wants to avoid showing search results pages within its own search results. Certainly I block all search results in robots.txt for all our sites.
You may lose 4% of your search traffic in the short term, but in the long term it could mean that you gain far more.
-
The Google Webmaster Guidelines suggest you should "Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don't add much value for users coming from search engines."
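As a rough illustration of that guideline, assuming the search results live under /search (the path is just an example):

```
# Hypothetical robots.txt - stops compliant crawlers from crawling
# auto-generated search results under /search.
User-agent: *
Disallow: /search
```

Just remember, as noted above, a disallow only stops crawling - it won't remove the thousands of pages already in the index, which is why the noindex tag was recommended first.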