Should pages of old news articles be indexed?
-
My website publishes about 3 news articles a day and is set up so that old news articles can be accessed through a "back" button, with articles moving to page 2, then page 3, then page 4, and so on as new articles push them down. These archive pages include only a link to each article and a short snippet.
I was thinking I would want Google to index the first 3 pages of articles, but beyond that the pages are not worthwhile. Could these pages harm me? Should they be noindexed and/or given a canonical pointing to the main news page? Or is leaving them as is fine because they are so deep in the site that Google won't see them, meaning I also won't be penalized for having weak content?
Thanks for the help!
-
Ah, I'm sorry, I misinterpreted you - so it's essentially about pagination? rel="next"/rel="prev" is probably the best way to go: the first page will be given the equity and the pages won't have to compete with each other for rankings. Google has a pretty comprehensive guide: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1663744
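For reference, the markup is just a pair of link tags in the head of each paginated archive page. A minimal sketch, with placeholder URLs standing in for your own news archive:

<!-- On page 2 of the news archive (example URLs only) -->
<link rel="prev" href="https://www.example.com/news/" />
<link rel="next" href="https://www.example.com/news/page/3/" />

The first page only needs rel="next", and the last page only needs rel="prev".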
-
Thanks Alice, but my question is about the pages the articles are linked from, not the actual articles themselves (which are 100% staying indexed).
-
Hi Sara,
If the articles are time-sensitive but high quality, I wouldn't noindex them. They could still have value in the future (for example, if a related story comes up, you can link back to the old article). You might also find ways to refresh or recycle them, such as adding a follow-up, updating the information, or promoting a really great post "From Our Archives". They could also be a good long-tail source of traffic for people looking for information on past news/events.
Google will still index old and outdated articles, but it's smart enough to recognize that these posts are dated and therefore won't assign big chunks of PageRank to them.
However, if the articles are low quality, I would take action to improve the good content/poor content ratio. The ideal situation would be to improve the articles themselves, but that might not be feasible if you've been publishing three per day for an extended period of time. I would conduct a thorough audit to see what content can be saved or improved and what content should be deleted. I wouldn't bother with noindex or canonicals - if it's good content, leave it up and let it be indexed; if it's bad content that can't be saved, remove it.
Finally, if you are redirecting old articles, I would be careful about where they redirect to. Ideally you'd redirect from a low-quality article to a high-quality article on the same subject. A big increase in URLs pointing to the main news page could raise a red flag, and could force readers to hunt for the information they expected.
Good luck!
-
The news articles themselves are not thin content, but the archive pages are relatively thin because they consist only of a link plus a snippet.
-
Are they all thin content? If not, then I don't think it's necessary to NOINDEX them. If you think some of them don't have any real value, you could NOINDEX those specific pages (individually, not all of them together). Google will crawl those pages no matter how deep they are, as long as they are accessible.
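If you do go that route, it's just a robots meta tag in the head of the individual archive pages. A minimal sketch, assuming you still want Google to follow the links through to the articles themselves:

<!-- In the head of an archive page you don't want indexed (example only) -->
<meta name="robots" content="noindex, follow" />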