What's the extent of the penalty applied by Google?
-
Hi!
I still don't get this website penalization that Google applies for duplicate content.
My site (a Photoshop website) has many pages that were among the first positions for top keywords.
Those pages were linked by sites like LifeHacker, BoingBoing, Microsiervos, SmashingMagazine, John Nack, and many other well-known blogs.
After mid-February 2012 everything went down the drain. I lost half of my traffic, and my well-ranked pages are now almost nowhere to be found.
I have plenty of ads on some pages of my site, and duplicate content (Amazon product descriptions only) on other pages.
So the good-quality pages my site has are no longer considered good quality just because I have some duplicate-content or ad-filled pages?
I'm not complaining. I'm trying to understand this.
Google needs to serve good information to its visitors. But since it found some trash on my site, it decides to remove both the trash and the good information from the search engine?
That doesn't sound logical to me. Why not just remove the trash and leave the good content?
Of course, I understand that information is added every day and someone may come up with something better than mine, but dropping 40 or more places in the rankings sounds more like a penalty to me.
Again, I'm not complaining (although it sounds like I am!), I just want to understand the reasons behind this.
Thanks,
Enrique
-
Yes, thanks Anthony. I will post back as soon (soon..?) as I find something.
Enrique
-
Sometimes what you call natural, Google calls spammy or unnatural. Just sayin'. Good luck. Post back with your findings; I'm interested to see how things work out for you. Best regards.
-
Yes, thanks, I will check that. I was planning to add nofollow to the Amazon pages. I will also check the anchors, but since they are all natural, any change I make will look artificial.
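For anyone doing the same audit, here is a minimal sketch of how to find Amazon links that still lack nofollow, assuming the site's pages are available as local HTML files (the directory layout and the "amazon." host check are illustrative, not taken from this thread):

```python
import glob
from html.parser import HTMLParser

class AffiliateLinkAuditor(HTMLParser):
    """Collect <a> tags that point at Amazon but lack rel="nofollow"."""

    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href") or ""
        rel = (attrs.get("rel") or "").lower()
        if "amazon." in href and "nofollow" not in rel:
            self.flagged.append(href)

# Walk every HTML file under a (hypothetical) local copy of the site.
for path in glob.glob("site/**/*.html", recursive=True):
    auditor = AffiliateLinkAuditor()
    with open(path, encoding="utf-8") as f:
        auditor.feed(f.read())
    for href in auditor.flagged:
        print(f"{path}: missing rel=nofollow -> {href}")
```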
Enrique
-
Have you tried removing the Amazon data feed from those pages, just to see if that is in fact what is impacting your rankings? What about the thousands of natural links pointing to your site? Are they all using varied anchor text, or is it just five anchors for your five main pages? If just five, that could also be affecting your ranking.
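As a rough illustration of that check, a minimal sketch that tallies anchor-text distribution, assuming you have exported your inbound links to a CSV with an "anchor_text" column (the file and column names are hypothetical; adjust them to whatever your link tool exports):

```python
import csv
from collections import Counter

counts = Counter()
with open("inbound_links.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        counts[row["anchor_text"].strip().lower()] += 1

total = sum(counts.values()) or 1
for anchor, n in counts.most_common(10):
    # If one or two anchors dominate (say, well over a fifth of all
    # links), the profile may look unnatural to a search engine.
    print(f"{anchor!r}: {n} links ({n / total:.1%})")
```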
-
Yes, I know that's the thing to do, but you must agree with me that it's something unnatural.
I have thousands of incoming links, and I only exchanged or asked for fewer than 20 of those. The rest are natural. If I spent time analyzing links, the result would be something absolutely artificial.
The same goes for quality pages. Let's say I have four or five pages that are the most referenced in my industry (just an example, of course). Visitors who read those pages get really good, top-class information. But I have an Amazon datafeed on my site.
Suddenly, the information on those top-quality pages is hidden from Google's search results because my site has an Amazon datafeed?
I know it's a simplistic example, but it can be translated as:
"A good article isn't good anymore just because of a site penalty"
It seems that Google is saying something like, "Hey, you can't read this amazing article because it comes from a site that has lots of junk. So suck it up and read this article of lesser quality, but from a pristine site!"
It is not about my site anymore, but about trying to understand the concept of it all. And of course it is an extreme example, but I think it is relevant.
-
No, Google does care about good-quality pages. It's just that if you throw in a bunch of bad pages, they dilute the goodness of your good ones. Once you clean up the duplicate content, I would suggest running a report on your inbound links. Check to see if your anchor text is spammy or concentrated on only a few choice keywords. When it comes to link building, you want to spread the keywords out so that there aren't just one or two money keywords carrying all the anchor text.
Also, I would remove any inbound links from questionable directories. Once you do that, I would think you should see some significant gains in rankings.
-
Thanks! So it is clear that Google doesn't care about single good-quality pages with good-quality links.
A good quality page needs a quality site to back it up.
Is that the criterion?
It sounds reasonable to me, but very difficult to repair.
Just for the record, my site isn't trash or low quality, but it is an old site and has some quirks from earlier times: lots of directory entries with little content, and datafeeds that used to work very well some years ago.
-
The trash parts of your site affect the site as a whole, not just the trash pages themselves. If Google penalized only those pages, you would still benefit from using trash to promote your good pages.
Now, from what I understand about penalties, there is a manual penalty and an algorithmic (or natural) penalty.
The algorithmic penalty can be fixed fairly easily by addressing the underlying issue, which in your case would be duplicate content. Clean up all the duplicate content and you will be on your way to flying under the penalty radar, so to speak. However, you will still need to add more quality content to make up for what you removed or cleaned up.
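As an illustration of the cleanup step, here is a minimal sketch for flagging near-duplicate pages (the datafeed descriptions copied across pages, for instance), assuming each page's text has been extracted to a plain-text file. The paths and the 0.85 threshold are assumptions, and a large site would want shingling or hashing rather than this pairwise comparison:

```python
import glob
import itertools
from difflib import SequenceMatcher

# Load the extracted text of each page (hypothetical directory layout).
pages = {}
for path in glob.glob("site_text/*.txt"):
    with open(path, encoding="utf-8") as f:
        pages[path] = f.read()

# Compare every pair and flag near-duplicates. Tune the threshold
# against pages you already know are boilerplate copies.
for (path_a, text_a), (path_b, text_b) in itertools.combinations(pages.items(), 2):
    ratio = SequenceMatcher(None, text_a, text_b).ratio()
    if ratio > 0.85:
        print(f"{path_a} and {path_b}: {ratio:.0%} similar - likely duplicates")
```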
Once that takes place, you should notice your ranking drop stabilize and, over time, begin the crawl back up. This would be a good time to implement other strategies like social media and quality link building.
Now, if it is a manual penalty, then you need to clean up all the duplicate content, ask for a manual review, and pray. Manual penalties are hard to overcome and require much more work. Sometimes it is best to just start from scratch with a new domain.
Hope this helps some.