What's the extent of the penalization applied by Google?
-
Hi!
I still don't understand the website penalization Google applies for duplicate content.
My site has many pages that were among the first positions for top keywords (it's a Photoshop website).
Those pages were linked by sites like LifeHacker, BoingBoing, Microsiervos, SmashingMagazine, John Nack, and many other well known blogs.
After mid-February 2012 everything went down the drain. I lost half of my traffic, and my well-ranked pages are now almost nowhere to be found.
I have plenty of ads on some pages of my site, and duplicate content (Amazon product descriptions only) on others.
So the good-quality pages on my site are no longer considered good quality just because I have some duplicate content or some ad-filled pages?
I'm not complaining. I'm trying to understand this.
Google needs to serve good information to its visitors. But since they found some trash on my site, they decided to remove both the trash and the good information from the search results?
That doesn't sound logical to me. Why don't they just remove the trash and leave the good content?
Of course, I understand that information is added every day and someone may come up with something better than mine, but dropping 40 or more places in the rankings sounds more like a penalty to me.
Again, I'm not complaining (although it sounds like I am!); I just want to understand the reasons behind this.
Thanks,
Enrique
-
Yes, thanks Anthony. I will post back as soon (soon..?) as I find something.
Enrique
-
Sometimes what you call natural, Google calls spammy or unnatural. Just sayin'. Good luck. Post back with your findings. I'm interested to see how things work out for you. Best regards.
-
Yes, thanks, I will check that. I was planning to add nofollow to the Amazon pages. I will also check the anchors, but since they are all natural, any change I make will look artificial.
Enrique
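As a rough illustration of the nofollow cleanup mentioned above, here is a minimal sketch (standard-library Python only; the page fragment and URLs are made up for the example) that scans a page's HTML for outbound Amazon links that are missing rel="nofollow":

```python
from html.parser import HTMLParser

class NofollowAuditor(HTMLParser):
    """Collects outbound links pointing at a given domain
    that are missing rel="nofollow"."""

    def __init__(self, target_domain):
        super().__init__()
        self.target_domain = target_domain
        self.missing_nofollow = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href") or ""
        rel = attrs.get("rel") or ""
        if self.target_domain in href and "nofollow" not in rel.split():
            self.missing_nofollow.append(href)

# Hypothetical page fragment: one tagged and one untagged Amazon link.
page = (
    '<a href="https://www.amazon.com/dp/B000FOO" rel="nofollow">ok</a>'
    '<a href="https://www.amazon.com/dp/B000BAR">needs nofollow</a>'
)
auditor = NofollowAuditor("amazon.com")
auditor.feed(page)
print(auditor.missing_nofollow)  # only the link without rel="nofollow"
```

In practice you would run something like this over the rendered HTML of each datafeed page before deciding which links to tag.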
-
Have you tried removing the Amazon data feed from those pages, just to see if that is in fact what is impacting your rankings? What about the thousands of natural links pointing to your site? Are they all using varied anchor text, or is it just five anchors for your five main pages? If it's just five, that could also be affecting your ranking.
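The anchor-text concentration check suggested above can be sketched in a few lines of standard-library Python (the backlink anchors here are invented for illustration; a real audit would export them from a backlink tool):

```python
from collections import Counter

def anchor_concentration(anchors):
    """Return each anchor's share of the backlink profile, highest
    first. A profile dominated by one or two exact-match anchors
    is the pattern to worry about."""
    counts = Counter(a.lower().strip() for a in anchors)
    total = sum(counts.values())
    return [(anchor, n / total) for anchor, n in counts.most_common()]

# Hypothetical backlink profile, heavily skewed toward one money keyword.
anchors = ["Photoshop tutorials"] * 70 + ["click here"] * 20 + ["example.com"] * 10
profile = anchor_concentration(anchors)
print(profile[0])  # ('photoshop tutorials', 0.7)
```

There is no official threshold for "too concentrated"; the point is simply to see the distribution rather than guess at it.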
-
Yes, I know that's the thing to do, but you must agree with me that it's something unnatural.
I have thousands of incoming links, and I exchanged or asked for fewer than 20 of those. The rest are natural. If I spent time analyzing and adjusting those links, the result would be absolutely artificial.
The same goes for quality pages. Let's say I have four or five pages that are the most referenced in my industry (just an example, of course). Visitors who read those pages get really good, top-class information. But I have an Amazon datafeed on my site.
Suddenly, the information on those top-quality pages is hidden from Google's search results because my site has an Amazon datafeed?
I know it's a simplistic example, but it can be translated as:
"A good article isn't good anymore just because of a site penalty"
It seems that Google is saying something like: "Hey, you can't read this amazing article because it is from a site that has lots of junk. So suck it up and read this article of lesser quality, but from a pristine site!"
It is not about my site anymore, but about trying to understand the concept of it all. And of course it is an extreme example, but I think it is relevant.
-
No, Google does care about good-quality pages. It's just that if you throw in a bunch of bad pages, they dilute the goodness of your good ones. Once you clean up the duplicate content, I would suggest running a report on your inbound links. Check whether your anchor text is spammy or concentrated on only a few choice keywords. When it comes to link building, you want to spread out the keywords so there aren't just one or two money keywords carrying all the anchor text.
Also, I would remove any inbound links from questionable directories. Once you do that, I would think you should see some significant gains in rankings.
-
Thanks! So it seems that Google doesn't care about individual good-quality pages with good-quality links.
A good quality page needs a quality site to back it up.
Is that the criteria?
It sounds reasonable to me, but very difficult to repair.
Just for the record, my site isn't trash or low quality, but it is an old site and has some quirks from the old days: lots of directory entries with little content, and datafeeds that used to work very well some years ago.
-
The trash part of your site affects the site as a whole, not just the trash pages themselves. If Google penalized only those pages, you could still benefit from using trash to promote your good pages.
Now, from what I understand about penalties, there are manual penalties and algorithmic (or natural) penalties.
The algorithmic penalty can be fixed fairly easily by addressing the underlying issue, which in your case would be duplicate content. Clean up all the duplicate content and you will be on your way to flying under the penalty radar, so to speak. However, you will still need to add more quality content to make up for what was removed or cleaned up.
Once that takes place, you should notice your ranking drop stabilize and, over time, begin the crawl back up. This would be a good time to implement other strategies like social media and quality link building.
Now, if it's a manual penalty, then you need to clean up all the duplicate content, ask for a manual review, and pray. Manual penalties are hard to overcome and will require much more work. Sometimes it's best to just start over with a new domain from scratch.
Hope this helps some.
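As a starting point for the duplicate-content cleanup described above, a minimal sketch (the crawled pages here are invented for the example) that groups page URLs by a hash of their normalized body text, so exact duplicates like repeated datafeed descriptions stand out:

```python
import hashlib
from collections import defaultdict

def find_duplicates(pages):
    """Group page URLs whose body text is identical after
    normalizing whitespace and case; returns only the groups
    containing more than one page."""
    groups = defaultdict(list)
    for url, text in pages.items():
        normalized = " ".join(text.lower().split())
        digest = hashlib.sha256(normalized.encode()).hexdigest()
        groups[digest].append(url)
    return [sorted(urls) for urls in groups.values() if len(urls) > 1]

# Hypothetical crawl: two product pages sharing the same datafeed text.
pages = {
    "/product/1": "Great Photoshop plugin.  Fast and easy.",
    "/product/2": "great photoshop plugin. fast and easy.",
    "/tutorial/levels": "How to use Levels in Photoshop.",
}
print(find_duplicates(pages))  # [['/product/1', '/product/2']]
```

This only catches exact duplicates; near-duplicates (a datafeed description with one extra sentence, say) would need a fuzzier comparison, but it is enough to triage the worst offenders first.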