Noindex vs. page removal - Panda recovery
-
I'm wondering whether there is a consensus within the SEO community as to whether noindexing pages, as opposed to actually removing them, makes a difference from Google Panda's perspective. When removing poor-quality content, does noindexing pages have less value than physically removing them, i.e. either 301ing or 404ing each page and removing the links to it from the site?
I presume that removing pages has a positive impact on the amount of link juice that gets to some of the remaining pages deeper in the site, but I also presume this doesn't have any direct impact on the Panda algorithm?
Thanks very much in advance for your thoughts, and for any corrections to my assumptions.
-
I think it can get pretty complicated, but a couple of observations:
(1) In my experience, NOINDEX does work - indexation is what Google cares about primarily. Eventually, you do need to trim the crawl paths, XML sitemaps, etc., but often it's best to wait until the content is de-indexed.
(2) From an SEO perspective (temporarily ignoring Panda), a 301 consolidates link juice - so, if a page has incoming links or traffic, that's generally the best way to go. If the page really has no value at all for search, either a 404 or NOINDEX should be OK (strictly from an SEO perspective). If the page is part of a path, then NOINDEX,FOLLOW could preserve the flow of link juice (see the tag example below), whereas a 404 might cut it off (not to that page, but to the rest of the site and deeper pages).
(3) From a user perspective, 301, 404, and NOINDEX are very different. A 301 is a good alternative to pass someone to a more relevant or more current page (and replace an expired one), for example. If the page really has no value at all, then I think a 404 is better than NOINDEX, just in principle. A NOINDEX leaves the page lingering around, and sometimes it's better to trim your content completely.
So, the trick is balancing (2) and (3), and there's often no one-size-fits-all solution. In other words, some groups of pages may have different needs than others.
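For reference, here's what the NOINDEX,FOLLOW option from point (2) looks like in practice - just the standard meta robots tag, nothing site-specific:

```html
<!-- Placed in the <head> of a thin or duplicate page you want out of
     the index while still letting link juice flow through its links.
     "follow" is the default behaviour anyway, but stating it makes the
     intent explicit and guards against an accidental "nofollow". -->
<meta name="robots" content="noindex,follow">
```

The 301 and 404 options, by contrast, are server responses rather than markup, so they live in your server configuration instead of the page itself.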
-
Agreed - my experience is that NOINDEX definitely can help with index dilution and even with Panda-level problems. What Google is mostly interested in is getting the pages out of the index.
Of course, you still need to fix internal link structures that might be causing bad URLs to roll out. Even a 404 doesn't remove a crawl path, and tons of them can cause crawler fatigue.
-
I disagree with everyone. The reason Panda hit you is that you were ranking for low-quality pages that you were telling Google you wanted indexed and ranked.
When you
a) remove them from sitemap.xmls
b) block them in robots.txt
c) noindex,follow or noindex,nofollow them in their meta tags (see the sketch after this answer)
you are removing them from Google's index and from the equation of good-quality vs. low-quality pages indexed on your site.
That is good enough. You can still have them return a 200 and be live on your site AND be included in your user navigation.
One example is user-generated pages, where users sign up and get their own URL (www.mysite.com/tom-jones, for example). Those pages can be live but should not be indexed, because they usually have no content other than a name.
As long as you are telling Google "don't index them; I don't want them considered in the equation of which pages show up in the index", you are fine keeping these pages live!
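To sketch option (b) - the /users/ path here is hypothetical, since profile pages sitting at the root (like /tom-jones) can't easily be pattern-matched in robots.txt. One caveat worth adding: a robots.txt Disallow stops Google from fetching the page at all, so it will never see an on-page noindex tag, and a blocked URL can still show up in the index if other sites link to it. So in practice you'd usually pick either (b) or (c) for a given set of pages, not both:

```
# robots.txt - keep crawlers out of thin user-profile pages,
# assuming (hypothetically) they all live under a /users/ path.
User-agent: *
Disallow: /users/
```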
-
Thanks guys
-
I would agree that noindex is not as good as removing the content, but it can still work as long as there are no links or sitemaps that lead Google back to the low-quality content.
I worked on a site that was badly affected by Panda in 2011. I had some success by noindexing genuine duplicates (pages that looked very alike but did need to be there) and removing low-quality pages that were old and archived. I was left with about 60 genuine pages that needed to be indexed and to rank well, so I had to pay a copywriter to rewrite all of them (originally we had the same affiliate copy on there as lots of other sites). It then took about three months for Google to lift, or at least reduce, the penalty and for our rankings to return to the top 10.
Tom is right that just noindexing is not enough. If pages are low quality or duplicates, then keep them out of sitemaps and navigation so you don't link to them either. You'll also need redirects in case anyone else links to them. In my experience, Google will eventually drop them from the index, but it doesn't happen overnight.
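For the redirect piece, a minimal sketch assuming an Apache server and hypothetical URLs - a 301 catches anyone who follows an old external link to a removed page:

```apache
# .htaccess (Apache) - permanently redirect a removed low-quality
# page to its closest surviving equivalent, so that external links
# pointing at the old URL still pass value to the site.
Redirect 301 /old-thin-page.html /better-category-page.html
```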
Good luck!
-
Thanks Tom
Understand your points. The idea behind noindexing is that you're telling Google not to take any notice of the page.
I guess the question is whether that works:
- Not at all
- A little bit
- A lot
- As good as removing the content
I believe it's definitely not as good as actually removing the content, but I'm not sure about the other three possibilities.
We did notice a small improvement in rankings when we noindexed a large portion of the site and actually took down several hundred other pages. It's hard to say which of those two things caused the improvement.
We've heard of it working for others, which is why I'm asking...
Appreciate your quick response
Phil
-
I don't see how noindexing pages would help with a Panda recovery if you're already penalised.
Once the penalty is in place, my understanding is that it will remain until all offending pages have been removed or changed to unique content. Therefore, noindexing would not work - particularly if the page is still accessible via an HTML/XML sitemap or the site navigation. Even then, I would presume that Google has the URL logged, and if the page remained as is, any penalty removal would not be forthcoming.
Noindexing pages that have duplicate content but haven't been penalised yet would probably prevent (or rather postpone) a penalty - although I'd still rather avoid the issue outright where possible. Once a penalty is in place, however, I'm pretty sure it will remain until the offending content is removed, even if the pages are noindexed.