Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Sanity Check: NoIndexing a Boatload of URLs
-
Hi,
I'm working with a Shopify site that has about 10x more URLs in Google's index than it really ought to. That's thousands of URLs bloating the index. Shopify makes it super easy to create endless new collections of products, where none of the new collections has any new content... just a new mix of products. Over time, this makes for a ton of duplicate content.
My plan, aside from creating other new/unique content, is to select some choice collections with keyword/topic opportunities in organic and add unique content to those pages, while noindexing the other 90% of excess collection pages.
The thing is, there's evidently no way I could find to just upload a list of URLs to Shopify and tag them noindex. And it's too time-consuming to do this one URL at a time, so I wrote a little script that adds a noindex tag (not nofollow) to pages sharing identical title tags, since many of them do. This saves some time, but I have to be careful not to inadvertently noindex a page I want to keep.
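Roughly, the script's grouping logic looks like this (the URLs, titles, and keep-list below are made-up placeholders, and this is only the decision step, not the part that touches Shopify):

```python
from collections import defaultdict

def pages_to_noindex(pages, keep_urls):
    """Group pages by title tag; flag every page whose title is shared
    by two or more URLs, except those explicitly kept."""
    by_title = defaultdict(list)
    for url, title in pages:
        by_title[title.strip().lower()].append(url)
    flagged = []
    for title, urls in by_title.items():
        if len(urls) > 1:  # duplicate title tag across multiple URLs
            flagged.extend(u for u in urls if u not in keep_urls)
    return sorted(flagged)

# Hypothetical collection pages sharing a title tag
pages = [
    ("/collections/red-widgets", "Widgets | Example Store"),
    ("/collections/blue-widgets", "Widgets | Example Store"),
    ("/collections/widgets", "Widgets | Example Store"),
    ("/collections/gadgets", "Gadgets | Example Store"),
]
keep = {"/collections/widgets"}  # the one page we want to stay indexed
print(pages_to_noindex(pages, keep))
# -> ['/collections/blue-widgets', '/collections/red-widgets']
```

The keep-list is exactly the "be careful" part: any page you want indexed has to be excluded by hand, because the title tag alone can't tell the keeper apart from its duplicates.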
Here are my questions:
-
Is this what you would do? To me it seems a little crazy that I have to do this by title tag, although faster than one at a time.
-
Would you follow it up with a deindex request (one URL at a time) with Google, or just let Google figure it out over time?
-
Are there any potential negative side effects from noindexing 90% of what Google is already aware of?
-
Any additional ideas?
Thanks! Best... Mike
-
-
Hi Michael
The problem you have is the very low-value content on all of those pages and the sheer impossibility of writing unique titles, descriptions, and content for them. There are just too many.
With a footwear client of mine, I noindexed a huge slug of tags, taking the page count down by about 25%. We saw an immediate 22% increase in organic traffic in the first month (March 18th 2017 - April 17th 2017); the duplicates were all size- and colour-related. Since canonicalising (I'm English, lol) more content and taking the site from 25,000 pages to around 15,000, the site is now 76% ahead of last year for organics. This is real, measurable change.
Now the arguments:
Canonicalisation
How are you going to canonicalise 10,000+ pages? Unless you have some kind of magic bullet, you're not going to be able to. But let's look at the logic.
Say we have a page of Widgets (brand) and they come in 7 sizes. When the range is fully in stock all of the brand/size pages will be identical to the brand page, apart from the title & description. So it would make sense to canonicalise back to the brand. Even when sizes started to run out, all of the sizes will be on the brand page. So size is a subset of the brand page.
Similar, but not the same, for colour. If colour is a tag, then every colour-sorted page will be on the brand page, so really they are the same page - just a slimmer selection. Now, I accept that the brand page will contain all colours, as it did all sizes, but the similarity is so great - 95% of the content being the same apart from the colour - that it makes sense to call them the same.
So for me canonicalisation would be the way to go, but it's just not possible, as there are too many of them.
Noindex
The upside of noindex is that it's generally easier to put the tag on the page, as there is no URL to tag. The downside is that the page is then not indexed in Google, so you lose a little juice. I would argue, by the way, that the chance of being found in Google for a size page is extremely slim: less than 2% of visits came from size pages before we junked them, and most of those were from a newsletter, so in reality it's under 1% and not worth bothering about. You can leave off the nofollow so that Google still crawls through all of the links on the pages - the better option.
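To be concrete about the tag itself, here's a minimal sketch of injecting a noindex-without-nofollow robots meta tag into a page's head. This is illustrative string-handling only, not how Shopify actually applies the tag:

```python
NOINDEX_TAG = '<meta name="robots" content="noindex, follow">'

def add_noindex(html):
    """Insert a noindex (but deliberately not nofollow) robots meta tag
    right after the opening <head> tag, so Google still follows the
    page's links even though the page itself drops out of the index."""
    lower = html.lower()
    i = lower.find("<head>")
    if i == -1:
        return html  # no head tag found; leave the document untouched
    insert_at = i + len("<head>")
    return html[:insert_at] + "\n  " + NOINDEX_TAG + html[insert_at:]

doc = "<html><head><title>Size 4 Boots</title></head><body></body></html>"
print(add_noindex(doc))
```

The key detail is `content="noindex, follow"` rather than `"noindex, nofollow"`: the page stays out of the results, but the links on it still pass equity.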
Considering your problem, and having experience of a number of sites with the same issue, noindex is your solution.
I hope that helps
Kind Regards
Nigel - Carousel Projects.
-
Hi Chris & Nigel,
Thank you for the considered responses. Good points about canonicalizing. A part I find frustrating is that a shared title tag across dozens or hundreds of pages can span many different products/groups of products. So the title tag is not a solid way to group canonicals.
Since the URL patterns vary, I don't see how I could work out which dozens or hundreds of pages should canonicalize to which single page, let alone make the change in Shopify other than one page at a time. My understanding is that this title-tag manipulation is the only handle Shopify gives for making these bulk changes.
Gah!
So, here are my follow up questions:
-
How big of a negative is this in its as-is state, and how much will noindexing most of that 90% improve things, Google Organic-wise? I ask because even the BS title-tag-to-noindex project is a huge time suck.
-
If a more efficient way to group and canonicalize in Shopify is ever revealed, would adding the canonical after noindexing recapture that lost authority later, or would the previous noindex have irretrievably lost it?
-
Given all that, would you continue as I am?
Thanks! Best... Mike
-
-
Hi Mike
I see this a lot with sites that have a ton of tag groups. One site I am working on has 50,000 pages in Google caused by tags appending themselves to every version of a URL; the site only has 400 products. Example:
Site/size-4
Site/womens/size-4
Site/womens/boots/size-4
Site/womens/boots/ankle/size-4
Site/womens/clarks/boots/size-4
Etc., etc. If there are other tags like colour and features, this can cause a huge three-dimensional matrix of additional pages that slows down the crawl of the site - Google may not crawl all of the site as a result.
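To see how fast that matrix grows, here's a quick back-of-envelope sketch with made-up counts - just 3 category paths, 6 sizes, and 3 colours:

```python
from itertools import product

# Hypothetical counts, far smaller than a real catalogue
categories = ["womens", "womens/boots", "womens/boots/ankle"]
sizes = [f"size-{n}" for n in range(3, 9)]            # 6 sizes
colours = ["colour-black", "colour-brown", "colour-red"]  # 3 colours

urls = [f"site.com/{c}/{s}" for c, s in product(categories, sizes)]
urls += [f"site.com/{c}/{col}" for c, col in product(categories, colours)]
urls += [f"site.com/{c}/{col}/{s}" for c, col, s in product(categories, colours, sizes)]
print(len(urls))  # 3*6 + 3*3 + 3*3*6 = 81 tag pages from only 3 category paths
```

With hundreds of category paths and more tag dimensions, the multiplication is how 400 products turn into 50,000 indexed URLs.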
If it's possible to canonicalise then that is the best option, as juice and follows are retained - very often the page with the tag lopped off is the one the tag page should cite.
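"Lopping the tag off" could be sketched like this - the size/colour URL pattern here is assumed for illustration, so any real site would need its own rules:

```python
import re

# Assumes tag pages end in a /size-* or /colour-* segment
TAG_SEGMENT = re.compile(r"/(size|colour)-[^/]+$")

def canonical_for(path):
    """Derive the canonical target by stripping the trailing tag
    segment off the filtered URL's path."""
    return TAG_SEGMENT.sub("", path) or "/"

print(canonical_for("/womens/boots/ankle/size-4"))  # -> /womens/boots/ankle
print(canonical_for("/womens/boots/colour-red"))    # -> /womens/boots
print(canonical_for("/womens/boots"))               # unchanged -> /womens/boots
```

Each tag page would then emit `<link rel="canonical" href="...">` pointing at the stripped path. The catch, as above, is having a place to run this logic at all - which is exactly what Shopify doesn't make easy.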
In extreme circumstances I would consider noindexing the pages, as they offer very skinny content and rubbish meta, and it's impossible to handle them individually. I have seen significant improvement in organics as a result.
Personally, I don't think it's enough to simply leave Google to figure it out, although I have seen some sites with very high DA get away with it.
To be honest, I am pretty shocked that Shopify doesn't have a feature to cope with this.
Regards
Nigel
Carousel Projects.
-
Hello Michael Johnson and Mozzers,
I have seen Shopify do this a few times, though I do not have clients on that particular platform at the moment. It is frustrating. You're right to want to resolve this issue. Between duplicate content, authority conflicts, and wasted crawl budget, one issue or another is bound to hold back site performance.
**Is this what you would do?** Not immediately, no. I want to see those pages canonicalized. That way, your preferred pages get all the juice back from their respective canonical links. Is this an option for you?
**Deindex request... and side effects?** Canonical tags would make these parts irrelevant (yay, less work!). To be thorough, though: I'd let Google figure it out unless you have strong evidence your crawl budget is maxed. And I don't see any negative side effects from noindexing duplicate content. If worse comes to worst, you have a good plan.
Shape that content,
CopyChrisSEO and the Vizergy Team