How should I handle URLs created by an internal search engine?
-
Hi,
I'm aware that internal search result URLs (www.example.co.uk/catalogsearch/result/?q=searchterm) should ideally be blocked using the robots.txt file. Unfortunately the damage has already been done: a large number of internal search result URLs have been created and indexed by Google. I have double-checked, and these pages account for only approximately 1.5% of traffic per month.
Is there a way I can remove the internal search URLs that have already been indexed and then stop this from happening in the future? I presume the last step would be to disallow /catalogsearch/ in the robots.txt file.
Thanks
-
Basic cleanup
From a procedural standpoint, you want to first add a noindex meta tag to the search result pages. Google has to see that tag in order to act on it and remove the URLs. You can also enter some of the URLs into the removal tool in Webmaster Tools.
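As a sketch, the tag would sit in the head of the search results template (the robots meta tag shown is standard; where the template lives depends on your platform):

```html
<!-- In the <head> of the catalogsearch results template -->
<meta name="robots" content="noindex, follow" />
```

Many people use "noindex, follow" here so that links on the results page can still be followed while the page itself drops out of the index; plain "noindex" also works.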
Next, you would want to add /catalogsearch/ to robots.txt once you see the pages dropping out of the index.
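Once the pages have dropped out, the robots.txt addition itself is just (assuming the standard file at the site root):

```
User-agent: *
Disallow: /catalogsearch/
```

The order matters: if you add this rule before the noindex has done its work, Google can no longer crawl those pages and therefore never sees the noindex tag.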
Advanced cleanup
If any of these search result URLs are ranking and serving as landing pages in Google, you may want to consider 301 redirecting them to the most closely related category pages.
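If the site runs on Apache, one hedged sketch of such a redirect (mod_rewrite in an .htaccess file; the search term and target category here are made-up illustrations, not URLs from the question):

```apache
RewriteEngine On
# Hypothetical example: send the indexed search result for "oak+table"
# to its closest matching category page, dropping the query string.
RewriteCond %{QUERY_STRING} ^q=oak\+table$ [NC]
RewriteRule ^catalogsearch/result/$ /dining-tables/? [R=301,L]
```

This is only worth doing for the handful of search URLs that actually rank or have inbound links; the noindex route handles the rest.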
My 2 cents: I only use the GWT parameter handler on parameters that I have to show to the search engines. Otherwise I try to hide all those URLs from Google to help with crawl efficiency.
Note that it is really important to find out which pages/URLs Google has cataloged, to make sure you don't remove a page that is actually generating some traffic for you. A landing page report from GA would help with this.
Cheers!
-
On top of Lesley's recommendations, both Google and Bing have URL parameter exclusion options in their webmaster tools.
-
I am guessing that you are using a system that templates pages and adds a query string after the search, something like search.php?caws+cars. I would set a noindex, nofollow in the header of all pages that use the search template. Then I would also add a rule to robots.txt to disallow the search pages. They will start dropping out of the results pages in about a week or so.
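One way to send that directive as an actual HTTP header on Apache (a sketch assuming mod_headers is enabled; search.php is the script name guessed at above, and the X-Robots-Tag header is equivalent to the meta tag but also covers non-HTML responses):

```apache
# Requires mod_headers; equivalent to a noindex,nofollow meta tag.
<FilesMatch "search\.php$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>
```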
Related Questions
-
Should I Add Location to ALL of My Client's URLs?
Hi Mozzers, My first Moz post! Yay! I'm excited to join the squad 🙂 My client is a full service entertainment company serving the Washington DC Metro area (DC, MD & VA) and offers a host of services for those wishing to throw events/parties. Think DJs for weddings, cool photo booths, ballroom lighting etc. I'm wondering what the right URL structure should be. I've noticed that some of our competitors do put DC area keywords in their URLs, but with the move of SERPs to focus a lot more on quality over keyword density, I'm wondering if we should focus on location-based keywords in traditional areas on page (e.g. title tags, headers, metas, content etc.) instead of having keywords in the URLs alongside the traditional areas I just mentioned. So, on every product-related page should we do something like:
example.com/weddings/planners-washington-dc-md-va
example.com/weddings/djs-washington-dc-md-va
example.com/weddings/ballroom-lighting-washington-dc-md-va
OR
example.com/weddings/planners
example.com/weddings/djs
example.com/weddings/ballroom-lighting
In both cases, we'd put the necessary location-based keywords in the proper places on-page. If we follow the location-in-URL tactic, we'd use DC area terms in all subsequent product page URLs as well. Essentially, every page outside of the home page would have a location in it. Thoughts? Thank you!!
-
Which search engines should we submit our sitemap to?
Other than Google and Bing, which search engines should we submit our sitemap to?
-
How do search engines treat content that is collapsed on mobile but open by default on desktop?
Hello everyone!
To have a mobile-friendly UX we chose to collapse some of the page content. On desktop it is open by default and the user can see the whole content. Do search engines see the content even when it's collapsed? Can the collapsed mode on mobile hurt our SERP rankings?
-
Blocking Certain Site Parameters from Google's Index - Please Help
Hello, So we recently used Google Webmaster Tools in an attempt to block certain parameters on our site from showing up in Google's index. One of our site parameters is essentially for user location and accounts for over 500,000 URLs. This parameter does not change page content in any way, and there is no need for Google to index it. We edited the parameter in GWT to tell Google that it does not change site content and to not index it. However, after two weeks, all of these URLs are still definitely getting indexed. Why? Maybe there's something we're missing here. Perhaps there is another way to do this more effectively. Has anyone else run into this problem? The path we used to implement this action: Google Webmaster Tools > Crawl > URL Parameters. Thank you in advance for your help!
-
Created the content, yet we don't rank for it. Toxic website?
Hey everyone, I'm beginning to think our site is toxic, i.e. it'll never rank properly again irrespective of what we do. I recently published some data (2 months ago) in an interactive visual called the "iPhone 5S Price Index". I did outreach and got thousands of links from sites including Forbes, Gizmodo (various international versions), Washington Post, The Guardian, NY Times, etc. All of these results dominate the Google rankings, all with links pointing to us. YET, we're nowhere to be seen. What incentive is Google giving content creators, like me, to continue producing content that is obviously popular if we can't even rank for it? The traffic we received was fantastic. In one day the traffic was 40 times our average, which made me smile like a Cheshire Cat from ear to ear, but we need to improve our rankings overall, otherwise the value to us is lost. The traffic wasn't there to buy our service; they were there to see the graphic. Hopefully our brand exposure leads to future sales, but it's a pittance compared to our previous rankings income. I've had this type of success 3 times in the last few months on this site alone. Yet nothing changes. We suffered a loss of rankings in September 2012 and have been fighting ever since to get them back. Now I'm losing hope it is even possible. Does anyone know why our site wouldn't rank when we're undeniably the source that created the work? Also, why wouldn't the increase in domain authority (which has jumped about 10 points according to OSE) have a knock-on effect for the rest of our keywords, or even let us appear within the top 100 for ones we obviously serve? We do Real Company Shit, and we're good at it. But I need these rankings back. It's driving me nuts. Thanks.
-
Duplicate site (disaster recovery) being crawled and creating two indexed search results
I have a primary domain, toptable.co.uk, and a disaster recovery site for this primary domain named uk-www.gtm.opentable.com. In the event of a disaster, toptable.co.uk would get CNAMEd (DNS aliased) to the .gtm site. Naturally the .gtm disaster recovery domain is an exact match of the toptable.co.uk domain. Unfortunately, Google has crawled the uk-www.gtm.opentable site, and it's showing up in search results. In most cases the gtm URLs don't get redirected to toptable; they actually appear as an entirely separate domain to the user. The strong feeling is that this duplicate content is hurting toptable.co.uk, especially as .gtm.ot is part of the .opentable.com domain, which has significant authority. So we need a way of stopping Google from crawling gtm. There seem to be two potential fixes. Which is best for this case? 1) Use robots.txt to block Google from crawling the .gtm site, or 2) canonicalize the gtm URLs to toptable.co.uk. In general Google seems to recommend the canonical, but in this special case the robots.txt change could be best. Thanks in advance to the SEOmoz community!
-
Url structure for multiple search filters applied to products
We have a product catalog with several hundred similar products. Our list of products allows you to apply filters to hone your search, so that in fact there are over 150,000 different individual searches you could come up with on this page. Some of these searches are relevant to our SEO strategy, but most are not. Right now (for the most part) we save the state of each search in the fragment of the URL, or in other words in a way that isn't indexed by the search engines. The URL (without hashes) ranks very well in Google for our one main keyword. At the moment, Google doesn't recognize the variety of content possible on this page. An example is: http://www.example.com/main-keyword.html#style=vintage&color=blue&season=spring We're moving towards a more indexable URL structure, one that could potentially save the state of all 150,000 searches in a way that Google could read. An example would be: http://www.example.com/main-keyword/vintage/blue/spring/ I worry, though, that giving so many options in our URL will confuse Google and create a lot of duplicate content. After all, we only have a few hundred products, and inevitably many of the searches will look pretty similar. Also, I worry about losing ground on the main http://www.example.com/main-keyword.html page, when it's ranking so well at the moment. So I guess the questions are: Is there such a thing as having URLs be too specific? Should we noindex or set rel=canonical on the pages whose keywords are nested too deep? Will our main keyword's page suffer when it has to share all the inbound links with these other, more specific searches?
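One way to hedge against the duplicate-content risk described in the question is to canonicalize the deep filtered variations back to the main page (a sketch using the example URLs from the question; note Google treats rel=canonical as a strong hint, not a directive):

```html
<!-- On http://www.example.com/main-keyword/vintage/blue/spring/ -->
<link rel="canonical" href="http://www.example.com/main-keyword.html" />
```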
-
Search Engine Blocked by robots.txt for Dynamic URLs
Today I was checking crawl diagnostics for my website and found the warning "search engine blocked by robots.txt". I have added the following syntax to the robots.txt file for all dynamic URLs:
Disallow: /*?osCsid
Disallow: /*?q=
Disallow: /*?dir=
Disallow: /*?p=
Disallow: /*?limit=
Disallow: /*review-form
The dynamic URLs are as follows: http://www.vistastores.com/bar-stools?dir=desc&order=position, http://www.vistastores.com/bathroom-lighting?p=2 and many more... So why does it show me a warning for this? Does it really matter, or is there another solution for these kinds of dynamic URLs?
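A quick way to sanity-check which URLs those Disallow patterns actually cover is to approximate the matching yourself (a rough sketch of Google-style wildcard semantics, not an official parser; Python's built-in urllib.robotparser does not reliably handle `*` wildcards):

```python
import re

def google_style_match(pattern: str, url_path: str) -> bool:
    """Return True if a robots.txt Disallow pattern blocks url_path,
    using Google-style wildcards: '*' matches any run of characters
    and '$' anchors the end of the URL."""
    regex = re.escape(pattern).replace(r"\*", ".*").replace(r"\$", "$")
    return re.match(regex, url_path) is not None

rules = ["/*?osCsid", "/*?q=", "/*?dir=", "/*?p=", "/*?limit=", "/*review-form"]
for path in ["/bathroom-lighting?p=2",
             "/bar-stools?dir=desc&order=position",
             "/bar-stools"]:
    blocked = any(google_style_match(r, path) for r in rules)
    print(f"{path} -> {'blocked' if blocked else 'allowed'}")
```

Running this shows the two parameterized example URLs coming out blocked and the bare category page allowed, which is exactly why the crawl diagnostics report those warnings: the rules are working as written, so the warning is expected rather than a problem.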