How do internal search results get indexed by Google?
-
Hi all,
Most of the URLs that are created by using the internal search function of a website/web shop shouldn't be indexed since they create duplicate content or waste crawl budget.
The standard approach is to apply 'noindex, follow' to these pages, or sometimes to use robots.txt to disallow crawling of them.
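For illustration - assuming the internal search URLs live under something like /search, which will vary per site - the two options look roughly like this:

    <!-- Option 1: on each search results page -->
    <meta name="robots" content="noindex, follow">

    # Option 2: in robots.txt (the /search path is just an example; adjust to your URL structure)
    User-agent: *
    Disallow: /search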
The first question I have is how these pages would actually get indexed in the first place if you didn't use one of the options above. Crawlers follow links to index a website's pages. If a random visitor comes to your site and uses the search function, this creates a URL. There are no links leading to this URL, it is not in a sitemap, and it can't be found by navigating the website. So how can search engines index these URLs that were generated by the internal search function?
Second question: let's say somebody embeds a link on their website pointing to a URL on your website that was created by an internal search. Now let's assume you used robots.txt to make sure these URLs weren't indexed, meaning Google won't even crawl those pages. Is it possible, then, that the link used on the other website will show an empty page after a while, since Google doesn't even crawl this page?
Thanks for your thoughts, guys.
-
Firstly (and I think you understand this, but for the benefit of others who find this page later): any user landing on the actual page will see its full content - robots.txt has no effect on their experience.
What I think you're asking about here is: if Google has previously indexed a page properly (by crawling it and discovering its content) and you then block it in robots.txt, what will it look like in the SERPs?
My expectation is that:
- It will appear in the SERPs as it used to - with meta information / title etc. - at least until Google would have recrawled it anyway, and possibly a bit longer if Google is slow to recrawl it after the robots.txt is updated
- Eventually, it will either drop out of the index, or it may remain but with the "no information" message that shows up when a page that is blocked in robots.txt gets indexed anyway
-
Hi Will,
The only question left is if it would be possible that somebody gets an empty page (so without any content on it) after a while when following an external link to one of your internal search URLs when this URL would be blocked by robots.txt. Search engines wouldn't crawl these pages but still would be able to index them because they follow the link. Or does a URL and its content stay available and visible once it is generated, no matter if it is not crawlable or not indexable? This is maybe a bit out there and it would surprise me, but in this short article that I came across John Mueller says:
"One thing maybe to keep in mind here is that if these pages are blocked by robots.txt, then it could theoretically happen that someone randomly links to one of these pages. And if they do that then it could happen that we index this URL without any content because its blocked by robots.txt. So we wouldn’t know that you don’t want to have these pages actually indexed."
In theory, this could then be the case for all URLs that are blocked by robots.txt but receive external links.
What's your view on this?
-
I think you could legitimately take either approach to be honest. There isn't a perfect solution that avoids all possible problems so I guess it's a combination of picking which risk you are more worried about (pages getting indexed when you don't want them to, or crawl budget -- probably depends on the size of your site) and possibly considering difficulty of implementation etc.
Given what we've heard about noindex,follow eventually becoming equivalent to noindex,nofollow, that does dampen the benefits of that approach, but it doesn't entirely negate it.
I'm not totally sold on the phrasing in the Yoast article - I wouldn't call it Google "ignoring" robots.txt - it just serves a different purpose. Google is respecting the "do not crawl" directive, but that has never guaranteed that they wouldn't index a page if it got external links.
I personally might lean towards the robots.txt solution on larger sites if crawl budget were the primary concern - just because it wouldn't be the end of the world if (some of) these pages got indexed when they had external links. The only reason we were trying to keep them out was for Google's benefit, so if Google wants to index them despite the robots block, it wouldn't keep me awake at night.
Whatever route you go down, good luck!
-
Thanks for the good answers, guys - really helpful! It's very clear now how these internal search URLs end up being indexed.
So is 'noindex, follow' always the best solution for URLs generated by internal searches, even though it uses crawl budget and blocking by robots.txt doesn't?
You could say that the biggest advantage of 'noindex, follow' is the preservation of link juice, but John Mueller states that Google treats 'noindex, follow' the same as 'noindex, nofollow' after a while (see this article).
According to this article from Yoast, the most important reason to use 'noindex, follow' is that Google mostly takes it into account, while it sometimes ignores robots.txt.
Maybe this interesting article gives the real reason. If I understand it correctly, somebody could end up on an empty page after a while when following a link on another website to one of these internal search URLs, if that URL were blocked by robots.txt. Search engines wouldn't crawl these pages, but they would still be able to index them because they follow the link. Or do a URL and its content stay available and visible once generated, no matter whether it is crawlable or indexable?
And an additional remark: I came across some big web shops that add a canonical tag to a search results page, pointing to the category URL to which the specific search relates. So if you search for 'black laptops', for example, the canonical version of the search results page would be example.com/laptops. If the search results pages aren't indexed and the links will eventually be 'nofollow', then these pages don't create any value, so what is the point of using canonical tags? On top of that, using canonicals and 'noindex' together should be avoided, according to John Mueller. Google will mostly pick rel=canonical over 'noindex', so this could be an extra reason for internal search URLs getting indexed even when they have the 'noindex' robots tag.
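For illustration, the pattern I saw looks something like this on a search results URL such as example.com/search?q=black+laptops (the URL pattern is just an example):

    <!-- On the search results page, declaring the category page as canonical: -->
    <link rel="canonical" href="https://example.com/laptops">

So the search page itself points all its signals at the category page rather than claiming to be indexable content in its own right.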
Thanks!
-
These are great additions. I'm particularly interested in point #1 - I had always suspected Google might try to predict, visit or probe URLs in other ways, but I didn't know any of the specifics.
-
This is a good answer. I'd add two small additional notes:
- Google is voracious in URL discovery. Even without any links to a page, or any of the other mechanisms described here, we have seen instances of URLs being discovered from other sources (think: Chrome usage data, crawling of common path patterns, etc.)
- On the description at the end of the answer about robots.txt: I wouldn't describe it as Google "ignoring" the no-crawl directive - they will still obey it and won't crawl the page - it's just that they can index pages they haven't crawled. Note that this is why you shouldn't combine a robots.txt block with noindex tags - Google won't be able to crawl the page to discover the tags, and so may still index it (see the sketch below).
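A minimal sketch of that conflict (the /search path is illustrative):

    # robots.txt - tells Googlebot not to fetch anything under /search:
    User-agent: *
    Disallow: /search

    <!-- Meta tag on the /search pages - but Googlebot can never fetch
         these pages, so it never sees this directive: -->
    <meta name="robots" content="noindex">

Because the robots.txt block prevents the noindex tag from ever being discovered, the URL can still end up indexed (without content) if it picks up external links.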
-
Actually quite often there are links to pages of search results. Sometimes webmasters link to them when there's no decent, official page available for a series of products which they wish to promote internally (so they just write a query that captures what they want and link to that instead, from CTA buttons and promotional pop-outs and stuff)
Even when that's not the case, users often share search results with each other on forums and stuff like that. Quite often, even when you think there are 'no links' (internally or externally) to a search results page, you can end up being wrong
Sometimes you also have things like related search results hidden in the code of a web page, which don't 'activate' until a user begins typing (instant search facilities and the like). If coded badly, then even when the user has entered nothing, a cloaked default list of related searches will appear in the source code or the modified source code (after scripts have run), and occasionally Google can get caught up there too
Another problem that can occur is certain search results pages accidentally ending up in the XML sitemap, but that's another kettle of fish entirely
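As a hypothetical example, an auto-generated sitemap might slip in an entry like:

    <!-- Hypothetical sitemap entry; the query URL is illustrative -->
    <url>
      <loc>https://example.com/search?q=black+laptops</loc>
    </url>

which hands Google a direct invitation to crawl and index that search results page.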
Sometimes you can have lateral indexation tags (canonical tags, hreflangs) going rogue too. Sometimes, if a page exists in one language but not another, the site is programmed to 'do something clever' to find relevant content, and in some cases these tags end up re-pointed to search result URLs to 'mask' the error of non-uniform multilingual deployment. Custom 404 pages can sometimes try to 'be helpful' by attempting to find similar content for end users and, in some cases, end up linking to search results (which means that if Google follows a broken link and ends up at the custom 404 URL, Googlebot can sometimes enter the /search area of a website)
You'd be surprised at the number of search results URLs which are linked to on the web, internally or externally
Remember: robots.txt doesn't control indexation, it only controls crawl accessibility. If Google believes a URL is popular (link signals) then they may ignore the no-crawl directive and index the URL anyway. Robots.txt isn't really the type of defense which you can '100% rely upon'