Thin Content Pages: Does Adding More Content Really Help?
-
Hello all,
So I have a website that was hit hard by Panda back in November 2012, and ever since, the traffic has continued to decline week by week. The site doesn't have any major Moz errors (aside from too many on-page links).
The site has about 2,700 articles and a text-to-HTML ratio of about 14.38%, so clearly we need more text in our articles and should relax a little on the number of pictures/links we add.
We have increased the text-to-HTML ratio for all of the new articles we put out, but I was wondering how beneficial it would be to go back and add more text content to the 2,700 old articles we have just sitting there.
Would this really be worth the time and investment? Could it stop the drastic decline in traffic and maybe even help it grow?
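For anyone who wants to measure this themselves rather than rely on a tool: the text-to-HTML ratio is just the visible text length divided by the total page source length. Here is a minimal sketch using only Python's standard library (the sample page string is made up for illustration):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text, skipping script and style blocks."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)

def text_to_html_ratio(html):
    """Return visible-text characters divided by total HTML characters."""
    parser = TextExtractor()
    parser.feed(html)
    text = "".join(parser.parts).strip()
    return len(text) / len(html) if html else 0.0

# Illustrative page source, not a real article:
page = ("<html><head><style>p{color:red}</style></head>"
        "<body><p>Gold prices rose today.</p></body></html>")
ratio = text_to_html_ratio(page)
```

A short blurb wrapped in a heavy template will score low on this; the same template with a few hundred words of article text scores much higher.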
-
Just sharing what we did...
We had a site that was hit by Panda. We had lots of very short news blurbs and some republished content from government agencies and academic institutions - much of that done at their request, for exposure to our visitors. Immediately after the hit, we noindex/followed or deleted/redirected the republished content. We also noindex/followed or deleted all of the short content. The site got out of Panda a few weeks later. There was some traffic loss, but nothing substantial.
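For reference, the two mechanisms mentioned above look like this in practice (the URLs and file paths here are placeholders, not the actual site's):

```html
<!-- In the <head> of a thin or republished page you want search engines to
     drop from the index while still following its links: -->
<meta name="robots" content="noindex,follow">
```

```apache
# In .htaccess (Apache with mod_alias), permanently redirecting a deleted
# article to its closest surviving page - example URLs only:
Redirect 301 /news/short-blurb.html /news/improved-article.html
```

The noindex/follow option keeps the page available to visitors and lets link equity flow through it; the 301 is for pages with no standalone value, pointed at the most closely related page that remains.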
As for improving short content: we have done a lot of that. We had lots of very short descriptions - two sentences plus one or two images - that were getting nice amounts of traffic. We expanded those to a few hundred words and two or three images (very time consuming and very expensive - a few hours per page). The rankings for short-tail queries went up nicely, and there was a huge increase in long-tail traffic. We later started expanding those few-hundred-word pages to one to two thousand words plus four to eight images - even more time consuming, a day or two per page. Again, rankings and traffic went up nicely.
Today, for each new article that I publish, I also make a major improvement to a page that is a proven traffic getter but could be improved a lot.
For you: take a look at the traffic those 2,700 old articles were getting prior to your Panda problem. Some might not be worth much, but others might be golden. Then decide what to delete/redirect, what to noindex/follow, and what to improve. Then begin working.
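The triage above can be sketched as a simple decision rule. The thresholds and example pages below are made-up placeholders; pick your own cutoffs from your analytics:

```python
def triage(page):
    """Bucket an old article by pre-Panda traffic and word count.
    Thresholds are illustrative placeholders, not recommendations."""
    visits = page["pre_panda_monthly_visits"]
    words = page["word_count"]
    if visits >= 100:
        return "improve"         # proven traffic getter: expand the content
    if visits >= 10 or words >= 300:
        return "noindex,follow"  # marginal: keep for users, hide from the index
    return "delete/redirect"     # no traffic, no substance: 301 to a related page

# Hypothetical articles for illustration:
pages = [
    {"url": "/gold-outlook", "pre_panda_monthly_visits": 450, "word_count": 180},
    {"url": "/short-blurb",  "pre_panda_monthly_visits": 3,   "word_count": 60},
    {"url": "/silver-faq",   "pre_panda_monthly_visits": 40,  "word_count": 900},
]
plan = {p["url"]: triage(p) for p in pages}
```

Running this over an analytics export gives you a worklist: improve the golden pages first, since those have already proven they can earn traffic.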
Good luck.