How to noindex lots of content properly: bluntly or progressively?
-
Hello Mozers!
I'm quite in doubt, so I thought: why not ask for help?
Here's my problem: I need to improve the SEO of a website that consists of a lot (1 million+ pages) of poor content. Basically it's like a catalog: you select a brand, then a product series, then the product, and finally fill out a form to make a request (sorry for the cryptic description, I can't be more precise).
Besides the classic SEO work, part of what (I think) I need to do is noindex some useless pages and rewrite the important ones with great content, but for the noindexing part I'm quite hesitant on the how.
There are about 200,000 pages with no visits in the past year, so I figure they're pretty much useless junk that would be better off noindexed. But the webmaster is afraid that noindexing that many pages will hurt the site's long tail (in case of future visits), so he wants to check the SERP position of every one of them and only eliminate those that are in the top 3 (for those, he thinks, there's no hope of improvement). I think that would waste a lot of time and resources for nothing, and I'd advise noindexing them regardless of their position.
The problem is I lack the experience to be sure of it, or of how to do it: is it wise to noindex 200,000 pages bluntly in one go (isn't that a bad signal for Google?), or should we do it progressively over a few months?
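For context, here's roughly how I'd pull the zero-visit list from an analytics export. This is just a minimal sketch: the file layout and the column names ("url", "visits_12m") are my own assumptions, not any real export format.

```python
import csv
import io

# Hypothetical analytics export: one row per URL with its visit count
# over the last 12 months. A real export would have different columns.
sample_export = """url,visits_12m
/brand-a/series-1/product-9,0
/brand-a/series-1/product-1,154
/brand-b/series-2/product-3,0
"""

def noindex_candidates(csv_text, min_visits=1):
    """Return the URLs whose 12-month visit count is below the threshold."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["url"] for row in reader if int(row["visits_12m"]) < min_visits]

# The two zero-visit URLs come back as noindex candidates.
print(noindex_candidates(sample_export))
```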
Thanks a lot for your help!
Johann.
-
Sorry you're stuck in that spot. I really would be worried that this "fix" would make life worse for everyone, but it's tough to come up with solutions that don't seem like band-aids. Best you may be able to do is get more aggressive about the de-indexation, focus on improving some core content, and maybe re-work the internal linking to focus more on key pages (and spread internal PR a bit less thinly).
-
Yeah, I get what you're saying and totally agree, since a radical overhaul is what I recommended from the start, but I only got a no-can-do response... until now. But their "yes" is more like:
-
Ok, rebuild our website entirely, just don't touch our website.
-
Errr, what?
Anyway, so a similar domain name and brand was in fact a bad idea.
Thanks a lot for your input (and your awesome Moz posts!)
Cheers,
Johann.
-
-
Given their history, two domains with overlapping content and a similar name seems like a terrible idea to me, to be blunt. If this really is a Panda issue, then you're potentially going to aggravate the situation and send out even more low quality signals.
It's hard to speculate, but I've seen a few situations where what seemed like Panda turned out to be something deeper. Directory clients have been hit hard, for example, as Google just seems to be devaluing the entire space (along with price comparison sites, many types of affiliates, etc.). I'm not talking about spammy sites, even, but the ones that provide some original value. It's just that Google doesn't see them as the end-supplier, and so they're getting discounted.
An end-run to a new domain isn't going to fix this. I strongly suspect that you've got something deeper going on that may take a radical overhaul of the main site and even the business/brand. I think it's better to accept that now than continue a gradual decline over the next couple of years.
-
Hi everyone,
Some news on this story that may (or may not) be of interest to some (even if I can't give the domain name), and a new question (I may also start another discussion for that one):
-
The website has lost a significant amount of traffic over the past year, even with the massive noindexing of 200,000 pages (I finally convinced him to do it, but it clearly wasn't enough): roughly a 40% gradual loss, coinciding nicely with several Panda updates.
-
We've worked hard to offer a new section of interesting content (not quite a blog, but nearly) presenting original statistics on the niche with visual presentations, plus a bunch of related content, about a hundred pages total. It's a drop in the ocean, but it gained a bit of popularity, some nice links, and good branding. I think it's probably the reason the website is still standing; it even earned a few top positions on important new keywords.
-
Last but not least, we've improved the user experience and bumped up our conversion rates, so the loss in traffic is partly (though not completely) compensated by the gains in conversion.
The site still drags along nearly a million pages of thin content and still takes a little hit with every Panda roll-out... So no recovery, but a controlled descent; it's still alive.
Now I've got the green light for a complete do-over: a rebuild with a completely new (lighter) structure and a new design. We're pumped full of ideas for great content and user experience, so it's going to be a fresh start. BUT (there's always a but), the webmaster wants to keep the old website running while it's still alive, and I wonder if we can take a similar domain name to capitalize on the brand's popularity, like www.brand-domain.com instead of www.branddomain.com (in case it's not clear, we'd take the same domain name with a dash in it, so the brand stays recognizable). Is it going to look manipulative to Google to have two websites with nearly the same domain name, the exact same brand, and the same service (so the same keywords targeted)? Any other caveats?
(I know they'd compete with each other, but they'll have different content, and it would be temporary: as soon as the new one reaches the first one's popularity, we'll prepare a proper redirect. Could be a month, could be a year later.) Thanks for any input! I'll wait before starting a new discussion to avoid any clutter^^
Johann
-
-
Thanks a lot for your insight, Dr. Pete.
I'll manage to sell the larger cut sooner or later by convincing him. It's either that or I use a time machine to show him his future stats when Google releases the next Panda tweaks ^^
Option 1 is easier after all!
-
I wish I could convince people that more DOES NOT EQUAL better when it comes to index size. You'd think Panda would've been the nail in that coffin, but too many webmasters are still operating in 2005.
-
I've never seen an issue where a large-scale META NOINDEX caused Google to get suspicious. It's possible to NOINDEX the wrong pages and lose traffic, but Google generally doesn't get jumpy about it like they would a large scale 301-redirect (where you might be PR-sculpting).
If these are really duplicates, canonical tags might be a better bet. Honestly, while I agree with Stephen 99.9%, if there's no glaring current issue, you could ease into it. Start with the worst culprits - obvious, 100% duplicates. That should be an easier sell, too. If you can't sell the larger cut, it's not going to matter.
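For the duplicate cases, the canonical tag sits on each variant page and points at the version you want indexed; something like this (the URL is a placeholder):

```html
<!-- In the <head> of each near-duplicate variant,
     pointing at the preferred version of the page. -->
<link rel="canonical" href="https://www.example.com/brand/series/product-name">
```

That consolidates the signals onto one URL instead of just hiding the duplicates, which is usually gentler than a blanket NOINDEX when the pages really are near-copies.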
-
Damn, even after explaining that pages which don't generate traffic now won't generate much more in the future, and after giving an educated estimate of 0.05% potential future gains from keeping them versus the boatload of progress noindexing them could mean for the website, I couldn't convince the webmaster to cut them out of the index...
Anyway thanks for your help everyone !
-
noindex asap
thumbs up for this
it's not going to suddenly appear out of nowhere
ha ha... for sure!
-
Can you change the structure of the site and perhaps see this as an opportunity...
(granted, lots of work required)
Adding another level of sub-categories to separate the content further and allow better indexing?
-
If you use robots.txt, the engine will never read the follow tag. What I was suggesting is: don't use robots.txt, but use the meta tag "noindex, follow" to allow link juice to flow even though the pages are not indexed.
Search engines can still follow the links of pages that aren't indexed, but robots.txt tells them they are not allowed to crawl the page at all.
-
Thanks for your replies.
Well, I'm not asking whether I should noindex those pages; I'm pretty sure I have to.
It's just that brutally noindexing one fifth of a website in one go seems like it could look suspect to the search engines... So I wonder if I should very carefully choose which ones to noindex and which ones to keep indexed, even among unvisited pages, as the webmaster suggests, or do it slowly over a long period of time.
It's a big decision; I'm appealing to your professional experience to prevent me from making a potential mistake.
@AWCthreads: For an e-commerce website, your suggestion would seem reasonable; a robots.txt won't keep the pages out of the index if there are links to them, but it would reduce the quantity of duplicate content crawled. In my case, though, it would not be enough, so the noindex meta tag seems to be my only option.
@Stephen: you're right, traffic can't appear out of thin air for these pages. Even if some of them start to see visits, they would still add up to a negligible share, I believe. But I don't have the experience to back that up or the numbers to prove it.
@Alan Mosley: I'll be sure to add the follow tag on these pages even if they're no longer indexed; it'll still be valuable. And I guess it might also keep things from looking too suspicious to the engines, wouldn't it?
-
First, remember that all pages in the index have PageRank, and you should use that link juice to your advantage:
http://perthseocompany.com.au/seo/tutorials/a-simple-explanation-of-pagerank
Blocking in robots.txt is clumsy: you will have links pointing to pages that are not in the index, pouring link juice into nowhere. You can instead add a meta "noindex, follow" tag, which will allow link juice to flow in and out of the pages. If the pages are duplicates, then I would remove them and fix the broken links that causes.
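Concretely, that's one line in the head of each page you want dropped (this is the standard robots meta tag, nothing site-specific):

```html
<!-- Keeps the page out of the index, but lets crawlers follow its links,
     so link juice keeps flowing through the page instead of dead-ending. -->
<meta name="robots" content="noindex, follow">
```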
-
remove them from the sitemap, noindex asap. He has no long tail from those pages; it's not going to suddenly appear out of nowhere
-
Hi Johann. Excellent question, and a source of dispute for some people. I've not done it myself, but many people who want to noindex a large volume of pages will create a directory, put those files in it, and then block that directory in robots.txt.
Some people would ask why you would want to block a bunch of pages (product pages on an ecommerce site) that way, as they will not be seen/shared/sold etc. Well, my response would be: to prevent juice dilution on pages of little SEO value and to help keep the juice directed at the 20-30% of the products that are making you the most money.
I'm curious what others have to say about this and hope people weigh in on it.
-
Yeah, they are mostly duplicates (only about 10% of the text differs between variations)...
But nearly 80% of the pages are indexed, probably because the website has strong authority and a lot of visits: these are useful pages for people, just not useful to read^^. That's why I'm so hesitant to noindex that much content, even though the website HAS to improve its quality-content ratio if it wants to last for the long run.
Maybe I'll start with testing your sitemap idea. Thanks for the suggestion.
-
Are the pages mostly duplicate content? Do you know how many have been indexed?
If it's a lot, then yes, noindexing them will make it look like your site has dropped a ton of content. But if it's duplicate content, I'd go for it anyway, as it will probably help things.
Alternatively, how about removing them from the sitemap instead? They may still get found but at least you're giving them a clue that those pages don't matter to you.