How are they avoiding duplicate content?
-
One of the largest soccer stores in the USA runs a number of whitelabel sites for major partners such as Fox and ESPN. The effect of this is that they are creating duplicate content for their products (and even the overall site structure is very similar). Take a look at:
http://www.worldsoccershop.com/23147.html
http://www.foxsoccershop.com/23147.html
http://www.soccernetstore.com/23147.html
You can see that practically everything is the same, including:
- product URL
- product title
- product description
My question is, why is Google not classing this as duplicate content? Have they coded for it in a certain way or is there something I'm missing which is helping them achieve rankings for all sites?
-
The answer is right in your question - "runs a number of whitelabel sites". It is largely down to the original publisher publishing the content first and getting indexed: from there, any time the Google bot stumbles across the same content, it will recognise that it has seen the content before and attribute the ranking to the original. This is something Google themselves covered last year here (although more specifically for news at the time).
Duplicate content unfortunately isn't just "not shown" by the search engines (imagine how "clean" the SERPs would be if that were the case!); it's just ranked lower than the original publisher that Google is aware of. Occasionally you will get the odd page from a different domain that ranks, but that is usually down to it being fresh content. I have seen this myself with my own content being aggregated by a large news site: they might outrank me on occasion for a day on one or two pieces, but my original URL comes out on top in the end.
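For what it's worth, one mechanism a whitelabel partner *could* use to hand ranking credit back to the original publisher is the cross-domain rel="canonical" link element (supported by Google since late 2009). There is no evidence these particular stores do this - it's just a hypothesis you can test. Here is a minimal sketch, using only the Python standard library, of how you might check whether a page declares a canonical URL; the sample HTML below is made up for illustration:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Records the href of the first <link rel="canonical"> tag seen."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel", "").lower() == "canonical" and self.canonical is None:
            self.canonical = a.get("href")

# Hypothetical page source from a whitelabel site pointing at the original:
sample_html = """
<html><head>
  <title>FC Barcelona Home Jersey</title>
  <link rel="canonical" href="http://www.worldsoccershop.com/23147.html">
</head><body>...</body></html>
"""

finder = CanonicalFinder()
finder.feed(sample_html)
print(finder.canonical)  # -> http://www.worldsoccershop.com/23147.html
```

If the whitelabel pages returned no canonical (or a self-referencing one), you would know Google is sorting out attribution on its own, as described above.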
-
They rank as #1 for the relevant terms. It is very clear Google feels they are the original source of the content, and the other sites are duplicates.
I don't have a crystal ball to see the future, but based on current information, the original source site is not suffering in any manner.
-
Interesting feedback - is worldsoccershop.com (the original source) likely to suffer any penalty as a result of the whitelabel sites carrying the duplicate content?
-
Hey
I just did a search for some phrases I found on one of their product pages, wrapping this long query in double quotes:
"Large graffiti print on front that illustrates the club's famous players and history. The traditional blue jersey has gold details including team badge, adidas logo and sponsor design"
The results returned show worldsoccershop.com in first and second place, so they seem to be the authority for this product description.
I have a client that is setting up a store to take on some rather big boys like notonthehighstreet.com, and in this industry, where there are several established competitors for each product, the big authority stores seem to rank for the generic product descriptions with no real issue.
This is ultimately difficult for the smaller stores: whilst they have fewer resources, pages on my client's site that use these duplicate descriptions are just getting filtered out of the results. We can see this filtering in action with very specific searches like the one above, where we get the 'we have filtered out similar results' message in the search results and, lo and behold, my client's results are among those filtered.
So, to answer your original question:
They have not 'coded' anything in a specific way, and there is nothing you are missing as such. They are just an authority site and are 'getting away with it', which, for the smaller players, kind of sucks. That said, only the worldsoccershop pages are returned, so the other sites could well be filtered out.
Still, as I am coaching our client, see this not as a problem but as an opportunity. By creating unique content, we can hopefully leapfrog other, more authoritative sites that are all returning the exact same product description, and whilst I don't expect us to take first place, we can work towards the first page and out of that filter.
Duplicate content is a massive problem: on the site we are working on, there is one product description that Copyscape tells us appears on 300 other sites. Google wants to return rich result sets (some shops, some information, some pictures, etc.), not just ten copies of the same thing, so dare to be different and give them a reason to display your page.
Hope it helps
Marcus -
My question is, why is Google not classing this as duplicate content?
Why do you feel this content has not been flagged as duplicate content?
The reasonable search for these pages is Barcelona Soccer Jersey. Only one of the three sites has results for this term in the top 50, and it holds the #1 and #2 positions. If this were not duplicate content, you would expect to find the other two sites listed on the first page of Google results as well.
The perfect search for the page (very long-tail and unrealistic) is Barcelona 11/12 home soccer jersey. For this query, worldsoccershop.com ranks #1 and #3, foxsoccershop.com ranks #8 (a big drop considering the content is identical), and soccernetstore.com is not in the top 50 results.
The other two sites have clearly been identified as duplicate content or are otherwise being penalized quite severely.