Sanity Check: NoIndexing a Boatload of URLs
-
Hi,
I'm working with a Shopify site that has about 10x more URLs in Google's index than it really ought to: thousands of URLs bloating the index. Shopify makes it super easy to spin up endless new collections of products, where none of the new collections has any new content, just a new mix of products. Over time, this makes for a ton of duplicate content.
My response, aside from creating new/unique content elsewhere, is to pick some choice collections with keyword/topic opportunities in organic and add unique content to those pages, while noindexing the other 90% of excess collection pages.
The thing is, I couldn't find any method of just uploading a list of URLs to Shopify to tag as noindex, and it's too time-consuming to do this one URL at a time. So I wrote a little script that adds a noindex tag (not nofollow) to pages that share identical title tags, since many of them do. This saves some time, but I have to be careful not to inadvertently noindex a page I want to keep.
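For anyone curious, the script works roughly along these lines. This is only a minimal sketch: the shop domain, token, and title list are placeholders, and it leans on Shopify's documented `seo.hidden` metafield, which is meant to noindex a resource and drop it from the sitemap. Verify the endpoints and metafield behaviour against current Shopify docs before running anything like it.

```python
import requests

# Placeholders: substitute a real shop, token, and the titles to be noindexed.
SHOP = "example-store.myshopify.com"
TOKEN = "shpat_placeholder_token"
API = f"https://{SHOP}/admin/api/2024-01"
HEADERS = {"X-Shopify-Access-Token": TOKEN, "Content-Type": "application/json"}

# Title tags shared by the excess collections we want out of the index.
NOINDEX_TITLES = {"Widgets Mix", "New Arrivals Assortment"}

def iter_collections(endpoint):
    """Page through custom_collections or smart_collections via since_id."""
    params = {"limit": 250}
    while True:
        resp = requests.get(f"{API}/{endpoint}.json", headers=HEADERS, params=params)
        resp.raise_for_status()
        batch = resp.json().get(endpoint, [])
        if not batch:
            return
        yield from batch
        params["since_id"] = batch[-1]["id"]

for endpoint in ("custom_collections", "smart_collections"):
    for col in iter_collections(endpoint):
        if col["title"] in NOINDEX_TITLES:
            # seo.hidden = 1 asks Shopify to noindex the page and drop it
            # from the sitemap, without touching its links (no nofollow).
            metafield = {"metafield": {"namespace": "seo", "key": "hidden",
                                       "value": 1, "type": "number_integer"}}
            r = requests.post(f"{API}/collections/{col['id']}/metafields.json",
                              headers=HEADERS, json=metafield)
            print(col["handle"], r.status_code)
```

The guard against noindexing a keeper is the explicit title list: anything not named is left alone.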
Here are my questions:
- Is this what you would do? To me it seems a little crazy that I have to do this by title tag, although it's faster than one at a time.
- Would you follow it up with a deindex request to Google (one URL at a time), or just let Google figure it out over time?
- Are there any potential negative side effects from noindexing 90% of what Google is already aware of?
- Any additional ideas?
Thanks! Best... Mike
-
-
Hi Michael
The problem you have is the very low-value content on all of those pages, and the sheer impossibility of writing unique titles, descriptions, and body content for them. There are just too many.
With a footwear client of mine, I noindexed a huge slug of tag pages, taking the page count down by about 25%, and we saw an immediate 22% increase in organic traffic in the first month (March 18th 2017 - April 17th 2017). The duplicates were all size- and colour-related. Since canonicalising (I'm English, lol) more content and taking the site from 25,000 pages to around 15,000, the site is now 76% ahead of last year for organic traffic. That is real, measurable change.
Now the arguments:
Canonicalisation
How are you going to canonicalise 10,000+ pages? Unless you have some kind of magic bullet, you won't be able to, but let's look at the logic anyway.
Say we have a page of Widgets (brand) that come in 7 sizes. When the range is fully in stock, every brand/size page is identical to the brand page apart from the title and description, so it would make sense to canonicalise back to the brand. Even as sizes run out, every size still appears on the brand page: size is a subset of the brand page.
Colour is similar, though not identical. If colour is a tag, then everything on a colour-sorted page is also on the brand page, so really they are the same page, just a slimmer selection. I accept that the brand page contains all colours, as it did all sizes, but the similarity is so great (95% of the content is the same apart from the colour) that it makes sense to call them the same page.
So for me, canonicalisation would be the way to go, but it's just not practical here: there are too many pages.
Noindex
The upside of noindex is that it is generally easier to apply: the tag goes on the page itself, so there is no URL to map to a target. The downside is that the page drops out of Google's index, so you lose a little juice. I would argue, though, that the chance of being found in Google via a size page is extremely slim anyway. Less than 2% of visits came from size pages before we junked them, and most of those came from a newsletter, so in reality it was under 1% and not worth bothering about. Leave off the nofollow so that Google still crawls all of the links on those pages; noindex, follow is the better option.
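If you do go that route, it's worth spot-checking a sample of pages afterwards to confirm the tag shipped as noindex and without a stray nofollow. A minimal sketch with hypothetical URLs; it assumes a standard robots meta tag with the name attribute before content:

```python
import re
import requests

# Hypothetical sample: one page meant to be noindexed, one keeper.
URLS = [
    "https://example-store.com/collections/widgets-size-7",
    "https://example-store.com/collections/widgets",
]

# Assumes name="robots" appears before content=... in the tag; good
# enough for a spot-check, not a full HTML parse.
ROBOTS_META = re.compile(
    r'<meta[^>]+name=["\']robots["\'][^>]*content=["\']([^"\']+)["\']', re.I)

for url in URLS:
    html = requests.get(url, timeout=10).text
    match = ROBOTS_META.search(html)
    directives = match.group(1) if match else "(no robots meta: indexable by default)"
    print(f"{url}: {directives}")
```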
Considering your problem, and having seen a number of sites with the same issue, noindex is your solution.
I hope that helps
Kind Regards
Nigel - Carousel Projects.
-
Hi Chris & Nigel,
Thank you for the considered responses. Good points about canonicalizing. The part I find frustrating is that a title tag shared across dozens or hundreds of pages will span many different products/groups of products, so the title tag is not a solid way to group canonicals.
And since the URL patterns vary, I don't see how I could work out which dozens or hundreds of pages should canonicalize to which single page, let alone make the change in Shopify other than one page at a time. My understanding is that this title-tag manipulation is the only handle Shopify gives me for making these bulk changes.
Gah!
So, here are my follow-up questions:
- How big of a negative is this in its as-is state, and how much better will noindexing most of that 90% make things in Google organic? I ask because even the BS title-tag-to-noindex project is a huge time suck.
- If more is ever revealed about how to group and canonicalize more efficiently in Shopify, would adding the canonicals after noindexing recapture that lost authority, or would the earlier noindex have irretrievably lost it?
- Given all that, would you continue as I am?
Thanks! Best... Mike
-
-
Hi Mike
I see this a lot with sites that have a ton of tag groups. One site I am working on has 50,000 pages in Google's index, caused by tags appending themselves to every version of a URL; the site only has 400 products. For example:
Site/size-4
Site/womens/size-4
Site/womens/boots/size-4
Site/womens/boots/ankle/size-4
Site/womens/clarks/boots/size-4
Etc., etc. If there are other tags like colour and features, this can create a huge three-dimensional matrix of additional pages that slows down the crawl of the site; Google may not crawl all of the site as a result.
If it's possible to canonicalise, that is the best option, as juice and follows are retained. Very often the canonical target is simply the page with the tag lopped off.
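To illustrate "lopping the tag off": a minimal sketch of the mapping, assuming the filter tags share recognisable prefixes (the prefix list is an assumption, and stacked tags would need an extra pass):

```python
from urllib.parse import urlsplit, urlunsplit

# Assumed prefixes that mark a trailing segment as a filter tag.
TAG_PREFIXES = ("size-", "colour-", "color-")

def canonical_for(url: str) -> str:
    """Map a tag-filtered URL to its canonical target by dropping the
    trailing filter segment, e.g. /womens/boots/size-4 -> /womens/boots."""
    parts = urlsplit(url)
    segments = [s for s in parts.path.split("/") if s]
    if segments and segments[-1].startswith(TAG_PREFIXES):
        segments = segments[:-1]
    return urlunsplit((parts.scheme, parts.netloc,
                       "/" + "/".join(segments), "", ""))

print(canonical_for("https://site.example/womens/boots/size-4"))
# -> https://site.example/womens/boots
```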
In extreme circumstances I would consider noindexing the pages, as they offer very skinny content and rubbish meta, and it's impossible to handle them individually. I have seen significant improvement in organic traffic as a result.
Personally, I don't think it's enough to simply leave Google to figure it out, although I have seen some sites with very high DA get away with it.
To be honest, I am pretty shocked that Shopify doesn't have a feature to cope with this.
Regards
Nigel
Carousel Projects.
-
Hello Michael Johnson and Mozzers,
I have seen Shopify do this a few times, though I don't have clients on that particular platform at the moment. It is frustrating, and you're right to want to resolve it. Between duplicate content, authority conflicts, and wasted crawl budget, one issue or another is bound to hold back site performance.
**Is this what you would do?** Not immediately, no. I want to see those pages canonicalized. That way, your preferred pages get all the juice back through their respective canonical links. Is this an option for you?
**Deindex request... and side effects?** Canonical tags would make both of these parts irrelevant (yay, less work!). To be thorough, though: I'd let Google figure it out unless you have strong evidence your crawl budget is maxed out. And I don't see any negative side effects from noindexing duplicate content. If worse comes to worst, you have a good plan.
Shape that content,
CopyChrisSEO and the Vizergy Team