How do internal search results get indexed by Google?
-
Hi all,
Most of the URLs that are created by using the internal search function of a website/web shop shouldn't be indexed since they create duplicate content or waste crawl budget.
The standard way to go is to 'noindex, follow' these pages or sometimes to use robots.txt to disallow crawling of these pages.
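For anyone who wants to see what those two options look like in practice, here is a minimal sketch. The /search path and the ?q= parameter are hypothetical; substitute whatever pattern your platform actually uses for internal search URLs.

```
# robots.txt approach: block crawling of internal search URLs
# (hypothetical path pattern -- adjust to your site's actual URLs)
User-agent: *
Disallow: /search
Disallow: /*?q=
```

```html
<!-- meta robots approach, placed in the <head> of each search results
     page: allows crawling, but asks engines not to index the page -->
<meta name="robots" content="noindex, follow">
```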
The first question I have is how these pages would actually get indexed in the first place if you didn't use one of the options above. Crawlers follow links to index a website's pages. If a random visitor comes to your site and uses the search function, this creates a URL. There are no links pointing to this URL, it isn't in a sitemap, and it can't be reached by navigating the website. So how can search engines index URLs that were generated by using an internal search function?
Second question: let's say somebody embeds a link on their website pointing to a URL from your website that was created by an internal search. Now let's assume you used robots.txt to make sure these URLs weren't indexed. This means Google won't even crawl those pages. Is it possible then that the link that was used on the other website will lead to an empty page after a while, since Google doesn't even crawl this page?
Thanks for your thoughts, guys.
-
Firstly (and I think you understand this, but for the benefit of others who find this page later): any user landing on the actual page will see its full content - robots.txt has no effect on their experience.
What I think you're asking about here is: if Google has previously indexed a page properly (crawling it and discovering its content) and you then block it in robots.txt, what will it look like in the SERPs?
My expectation is that:
- It will appear in the SERPs as it used to (with title, meta information, etc.) at least until Google would have recrawled it anyway, and possibly a bit longer if Google fails to recrawl it promptly after the robots.txt is updated
- Eventually, it will either drop out of the index, or it may remain but with the "no information is available for this page" message that shows up when a page is blocked in robots.txt from the outset yet gets indexed anyway
-
Hi Will,
Thanks for the clear answer. Both solutions do have pros and cons.
The only question left is whether somebody could get an empty page (one without any content on it) after a while when following an external link to one of your internal search URLs, if that URL is blocked by robots.txt. Search engines wouldn't crawl these pages but could still index them because they follow the link. Or does a URL and its content stay available and visible once it is generated, regardless of whether it is crawlable or indexable? This is maybe a bit out there and it would surprise me, but in a short article I came across, John Mueller says:
"One thing maybe to keep in mind here is that if these pages are blocked by robots.txt, then it could theoretically happen that someone randomly links to one of these pages. And if they do that then it could happen that we index this URL without any content because its blocked by robots.txt. So we wouldn’t know that you don’t want to have these pages actually indexed."
This could then, in theory, be the case for any URL that is blocked by robots.txt but gets external links.
What's your view on this?
-
I think you could legitimately take either approach, to be honest. There isn't a perfect solution that avoids all possible problems, so it's a matter of picking which risk you are more worried about (pages getting indexed when you don't want them to, or crawl budget - which probably depends on the size of your site) and possibly considering difficulty of implementation.
Given what we've heard about noindex,follow eventually being treated as equivalent to noindex,nofollow, the benefits of that approach are dampened, but not entirely negated.
I'm not totally sold on the phrasing in the Yoast article - I wouldn't call it Google "ignoring" robots.txt; it just serves a different purpose. Google is respecting the "do not crawl" directive, but that has never guaranteed that they wouldn't index a page if it got external links.
I personally might lean towards the robots.txt solution on larger sites where crawl budget is the primary concern - just because it wouldn't be the end of the world if (some of) these pages with external links got indexed. The only reason we were trying to keep them out was for Google's benefit, so if Google wants to index them despite the robots block, it wouldn't keep me awake at night.
Whatever route you go down, good luck!
-
Thanks for the good answers, guys, really helpful! It's very clear now how these internal search URLs end up being indexed.
So is 'noindex, follow' always the best solution for URLs generated by internal searches, even though it uses crawl budget and blocking via robots.txt doesn't?
You could say that the biggest advantage would be the preservation of link juice when using 'noindex, follow', but John Mueller states that Google treats 'noindex, follow' the same as 'noindex, nofollow' after a while (see this article).
According to this article from Yoast, the most important reason to use 'noindex, follow' is that Google mostly takes it into account, whereas it sometimes ignores robots.txt.
Maybe this interesting article gives the real reason. If I understand it correctly, somebody could end up on an empty page after a while when following a link on another website to one of these internal search URLs, if that URL is blocked by robots.txt. Search engines wouldn't crawl these pages but could still index them, because they follow the link. Or does a URL and its content stay available and visible once it is generated, regardless of whether it is crawlable or indexable?
And an additional remark: I came across some big web shops that add a canonical tag to a search results page, pointing to the category URL to which the specific search relates. So if you search for 'black laptops', for example, the canonical version of the search results page would be example.com/laptops. If you don't index the search results pages and the links will eventually be treated as 'nofollow', then these pages don't create any value, so what is the point of using canonical tags? On top of that, using canonicals and 'noindex' together should be avoided, according to John Mueller. Google will mostly pick rel=canonical over 'noindex', so this could be an extra reason internal search URLs get indexed, even when they carry the 'noindex' robots tag.
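To make that concrete, here is the kind of conflicting markup I mean, as a hypothetical example based on the 'black laptops' scenario above:

```html
<!-- Head of a search results page at example.com/search?q=black+laptops
     (hypothetical URLs). The canonical says "treat this page as /laptops",
     while the robots tag says "don't index this page at all" -- mixed
     signals, and Google may honour the canonical and drop the noindex. -->
<link rel="canonical" href="https://www.example.com/laptops">
<meta name="robots" content="noindex, follow">
```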
Thanks!
-
These are great additions. I am particularly interested in point #1 - I had always suspected Google might try to predict, visit or probe URLs in other ways, but I didn't know any of the specifics.
-
This is a good answer. I'd add two small additional notes:
- Google is voracious in URL discovery: even without any links to a page or any of the other mechanisms described here, we have seen instances of URLs being discovered from other sources (think: Chrome usage data, crawling of common path patterns, etc.)
- On the description at the end of the answer about robots.txt: I wouldn't describe it as Google "ignoring" the no-crawl directive - they will still obey that, and won't crawl the page; it's just that they can index pages that they haven't crawled. Note that this is why you shouldn't combine a robots.txt block with noindex tags - Google won't be able to crawl the page to discover the tags, and so may still index it (a short sketch of this follows below)
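To illustrate that last point, here is a minimal Python sketch of how a well-behaved crawler consults robots.txt, using hypothetical example.com URLs: if a URL is disallowed, the page is never fetched, so any noindex tag sitting in its HTML is never seen.

```python
from urllib import robotparser

# Load and parse the site's robots.txt (hypothetical site)
rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()

url = "https://www.example.com/search?q=black+laptops"

if rp.can_fetch("*", url):
    # Only if crawling is allowed does the crawler fetch the page --
    # and only then can it see a <meta name="robots" content="noindex">
    # tag in the page's HTML.
    print("Allowed to crawl:", url)
else:
    # Blocked: the page body, including any noindex tag, is invisible
    # to the crawler. The bare URL can still be indexed if other
    # sites link to it.
    print("Blocked by robots.txt:", url)
```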
-
Actually, quite often there are links to pages of search results. Sometimes webmasters link to them when there's no decent, official page available for a series of products they wish to promote internally (so they just write a query that captures what they want and link to that instead, from CTA buttons, promotional pop-outs and so on).
Even when that's not the case, users often share search results with each other on forums and the like. Quite often, even when you think there are 'no links' (internal or external) to a search results page, you can end up being wrong.
Sometimes you also have things like related search results hidden in the code of a web page, which don't 'activate' until a user begins typing (instant search facilities and the like). If coded badly, even when the user has entered nothing, a hidden default list of related searches can appear in the source code or the modified source code (after scripts have run), and occasionally Google can get caught up there too.
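A hypothetical sketch of that failure mode: the 'related searches' list is rendered into the initial HTML and merely hidden with CSS, so crawlers can see the search URLs even though users never do until they start typing.

```html
<!-- Hypothetical instant-search widget: hidden from users by CSS,
     but the links sit in the source for any crawler to discover -->
<div id="related-searches" style="display: none;">
  <a href="/search?q=black+laptops">black laptops</a>
  <a href="/search?q=gaming+laptops">gaming laptops</a>
  <a href="/search?q=refurbished+laptops">refurbished laptops</a>
</div>
```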
Another problem that can occur is certain search results pages accidentally ending up in the XML sitemap, but that's another kettle of fish entirely.
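For completeness, an accidental entry of that kind might look like this (hypothetical URL):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Auto-generated entry that actively invites crawlers into the
       internal search area (hypothetical URL) -->
  <url>
    <loc>https://www.example.com/search?q=black+laptops</loc>
  </url>
</urlset>
```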
Sometimes you can have lateral indexation tags (canonical tags, hreflangs) going rogue too. If a page exists in one language but not another, the site may be programmed to 'do something clever' to find relevant content, and in some cases these tags end up re-pointed to search result URLs to 'mask' the error of non-uniform multilingual deployment. Custom 404 pages can also try to 'be helpful' by attempting to find similar content for end users, and in some cases they end up linking to search results (which means that if Google follows a dead link and ends up at the custom 404 URL, Googlebot can sometimes enter the /search area of a website).
You'd be surprised at the number of search results URLs which are linked to on the web, internally or externally.
Remember: robots.txt doesn't control indexation, it only controls crawl accessibility. If Google believes a URL is popular (link signals), then they may ignore the no-crawl directive and index the URL anyway. Robots.txt isn't really the type of defense which you can '100% rely upon'.