Who gets punished for duplicate content?
-
What happens if two domains have duplicate content? Do both domains get punished for it, or just one? If so, which one?
-
Neither, strictly speaking. Duplicate content is not a punishment; it is a relevance issue. Google sees the duplicate page as a copy of content that already exists elsewhere and therefore treats it as less relevant. Duplicated content may still rank on sites with greater authority, but Google will generally pick only one version to display. The site that isn't shown may feel punished, since it has the same content but gets no visibility.
Think of it like fashion: people do not usually line up for a mass-market product; they line up for the limited edition. Google does not want to show the same mass-market thing over and over again; it wants to show unique, relevant content.
-
Hi Thomas,
The ill effect of duplicate content isn't always logical, but it generally follows certain rules. First, it's not so much a 'punishment' as search engine confusion: if two pages show the same content, how does the search engine know which result will be better for the user? Google, for example, is forced to choose what it thinks the best result will be, and duplicate content lessens your chances of being chosen.
Google doesn't give specific penalties for duplicate content, but some practices are risky, for example scraping content from websites without permission. It depends on how the content has been duplicated. (See this video from Google's John Mueller.)
This post on Moz also clarifies the issues around duplicate content very well.
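For completeness: where duplication is unintentional, the most common technical signal is the rel=canonical link element, which tells search engines which version of a page you'd prefer them to index. A minimal sketch (the URL is just a placeholder):

```html
<!-- In the <head> of the duplicate page, pointing at the preferred version -->
<link rel="canonical" href="https://www.example.com/preferred-page/" />
```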
What type of duplicate content are you worried about? Do you own both of the websites in question?
Thanks, hope this helps.
Zoe
Related Questions
-
Do mobile and desktop sites that pull content from the same source count as duplicate content?
We are about to launch a mobile site that pulls content from the same CMS, including metadata. They both have different top-level domains, however (www.abcd.com and www.m.abcd.com). How will this affect us in terms of search engine ranking?
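If it helps, Google has documented a pair of annotations for sites that serve desktop and mobile versions on separate URLs. A sketch using the domains above (the /page path is a placeholder):

```html
<!-- On the desktop page (www.abcd.com/page): point to the mobile equivalent -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="http://www.m.abcd.com/page" />

<!-- On the mobile page (www.m.abcd.com/page): canonicalise to the desktop page -->
<link rel="canonical" href="http://www.abcd.com/page" />
```

These tell Google the two URLs are the same content in different formats rather than competing duplicates.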
Technical SEO | ovenbird
-
Database-driven content producing false duplicate content errors
How do I stop the Moz crawler from creating false duplicate content errors? I have yet to submit my website to Google's crawler because I am waiting to fix all my site optimization issues. Example: contactus.aspx?propid=200, contactus.aspx?propid=201... these are the same page, just with some old URL parameters stuck on them. How do I get Moz and Google not to consider these duplicates? I have looked at http://moz.com/learn/seo/duplicate-content with respect to rel="canonical" and I think I am just confused. Nick
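If it's useful, the usual rel=canonical fix for parameterized duplicates like these is to have every propid variant point back at one clean URL. A minimal sketch (the domain is a placeholder):

```html
<!-- In the <head> of contactus.aspx, output on every variant such as
     contactus.aspx?propid=200 and contactus.aspx?propid=201 -->
<link rel="canonical" href="https://www.example.com/contactus.aspx" />
```

Crawlers should then consolidate the parameterized URLs into the canonical one rather than flagging them as duplicates of each other.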
Technical SEO | nickcargill
-
Duplicate Content from Multiple Sources Cross-Domain
Hi Moz Community, We have a client who is legitimately repurposing, or scraping, content from site A to site B. I looked into it, and Google recommends the cross-domain rel=canonical tag described here: http://googlewebmastercentral.blogspot.com/2009/12/handling-legitimate-cross-domain.html The issue is that it is not a one-to-one situation. In fact, site B will have several pages of content from site A all on one URL. Below is an example of what they are trying to accomplish. EX - www.siteB.com/apples-and-oranges is made up of content from www.siteA.com/apples & www.siteA.com/oranges So with that said, do we still risk getting hit for duplicate content? Should we add multiple rel=canonical tags to reflect both pages? What should be our course of action?
Technical SEO | SWKurt
-
Why are some pages now duplicate content?
It is probably a silly question, but all of a sudden, the following pages for one of my clients are reported as duplicate content. I cannot understand why. They weren't before...
http://www.ciaoitalia.nl/product/pizza-originale/mediterranea-halal
http://www.ciaoitalia.nl/product/pizza-originale/gyros-halal
http://www.ciaoitalia.nl/product/pizza-originale/döner-halal
http://www.ciaoitalia.nl/product/pizza-originale/vegetariana
http://www.ciaoitalia.nl/product/pizza-originale/seizoen-pizza-estate
http://www.ciaoitalia.nl/product/pizza-originale/contadina
http://www.ciaoitalia.nl/product/pizza-originale/4-stagioni
http://www.ciaoitalia.nl/product/pizza-originale/shoarma
Thanks for any help in the right direction 🙂
Technical SEO | MarketingEnergy
-
If two websites pull the same content from the same source in a CMS, does it count as duplicate content?
I have a client who wants to publish the same information about a hotel (summary, bullet list of amenities, roughly 200 words + images) to two different websites that they own. One is their main company website where the goal is booking; the other is a special program where that hotel is featured as an option for booking under this special promotion. Both websites are pulling the same content file from a centralized CMS, but they are different domains. My question is twofold: • To a search engine, does this count as duplicate content? • If it does, is there a way to configure the publishing of this content to avoid SEO penalties (such as a feed of content to the microsite, etc.), or should the content be written uniquely for each site? Any help you can offer would be greatly appreciated.
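For what it's worth, rel=canonical works across domains, so one hedged option, assuming the main booking site is the version you want indexed (both URLs below are placeholders), would be:

```html
<!-- In the <head> of the hotel page on the promotional microsite,
     pointing at the equivalent page on the main company site -->
<link rel="canonical" href="https://www.maincompanysite.example/hotels/example-hotel/" />
```

This lets both sites display the shared CMS content while signalling which version search engines should rank.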
Technical SEO | HeadwatersContent
-
How much to change to avoid duplicate content?
Working on a site for a dentist. They have a long list of services that they want us to flesh out with text. They provided a bullet list of services, and we're trying to get 1 to 2 paragraphs of text for each. Obviously, we're not going to write this off the top of our heads; we're pulling text from other sources and reworking it. The question is, how much rephrasing do we have to do to avoid a duplicate content penalty? Do we make sure there are changes per paragraph, sentence, or phrase? Thanks! Eric
Technical SEO | ericmccarty
-
Thin/Duplicate Content
Hi Guys, So here's the deal: my team and I just acquired a new site that was built using some questionable tactics. Only about 5% of the entire site is actually written by humans; the remaining 40k+ pages (increasing by 1-2k auto-generated pages a day) are all auto-generated, thin content. I'm trying to convince the powers that be that we cannot continue to do this. Now, I'm aware of the issue, but my question is: what is the best way to deal with this? Should I noindex these pages at the directory level? Should I 301 them to the most relevant section where actual valuable content exists? So far it doesn't seem like Google has caught on to this yet, and I want to fix the issue without raising any more red flags in the process. Thanks!
Technical SEO | DPASeo
-
How to publish duplicate content legitimately without Panda problems
Let's imagine that you own a successful website that publishes a lot of syndicated news articles and syndicated columnists. Your visitors love these articles and columns, but the search engines see them as duplicate content. You worry about being viewed as a "content farm" because of this duplicate content and getting the Panda penalty. So, you decide to continue publishing the content and use... <meta name="robots" content="noindex, follow"> This allows you to display the content for your visitors, but it should stop the search engines from indexing any pages with this code. It should also allow robots to spider the pages and pass link value through them. I have two questions: If you use "noindex", will that be enough to prevent your site from being considered a content farm? Is there a better way to continue publishing syndicated content while protecting the site from duplicate content problems?
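As a sketch of where that directive would live on each syndicated page (the title is a placeholder):

```html
<head>
  <title>Example Syndicated Column</title>
  <!-- Keep this page out of the index, but let crawlers follow its links -->
  <meta name="robots" content="noindex, follow">
</head>
```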
Technical SEO | EGOL