Handling duplicate content whilst making both versions rank well
-
Hey MOZperts,
I run a marketplace called Zibbet.com and we have 1000s of individual stores within our marketplace. We are about to launch a new initiative giving all sellers their own stand-alone websites.
URL structure:
Marketplace URL: http://www.zibbet.com/pillowlink
Stand-alone site URL: http://pillowlink.zibbet.com (doesn't work yet)

Essentially, their stand-alone website is a duplicate of their marketplace store: same items (item titles, descriptions), same seller bios, same shop introduction content, etc., just with a different layout. You can scroll down and see a preview of the different pages here, if that helps you visualize what we're doing.
My Questions: My desire is for both the seller's marketplace store and their stand-alone website to have good rankings in the SERPs.
- Is this possible?
- Do we need to add any tags (e.g. "rel=canonical") to one of these so that we're not penalized for duplicate content? If so, which one?
- Can we just change the metadata structure of the stand-alone websites to skirt around the duplicate content issue?
Keen to hear your thoughts and if you have any suggestions for how we can handle this best.
Thanks in advance!
-
No. We're actually not launching this initiative for SEO purposes. We just want to create value for our users and having their own stand-alone website is valuable to them.
I just want to make sure we're structured properly from an SEO point of view so that we don't compromise the SEO of our marketplace, or their stand-alone site.
Also, each seller's site has content that is unique to them, but it is identical to the data in their marketplace store. So, every seller has a marketplace store (with items, a profile, etc.) AND a stand-alone website (with the same items, same profile, etc., just designed differently and accessible via a sub-domain).
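Given that one-to-one mapping between stand-alone pages and marketplace pages, the canonical URL for any stand-alone page can be derived mechanically. Here's a minimal sketch, assuming the URL layout shown in the question (seller name as the subdomain, and the same path under `www.zibbet.com/<seller>`); the `canonical_for` helper name is hypothetical:

```python
from urllib.parse import urlparse

def canonical_for(standalone_url: str) -> str:
    """Map a stand-alone subdomain URL to its marketplace counterpart.

    e.g. http://pillowlink.zibbet.com/some-item
      -> http://www.zibbet.com/pillowlink/some-item
    """
    parts = urlparse(standalone_url)
    seller = parts.hostname.split(".")[0]  # subdomain is the seller's store name
    return f"http://www.zibbet.com/{seller}{parts.path}"
```

A mapping like this could feed the rel=canonical tags on the stand-alone sites so they always point back to the matching marketplace page.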
Hope that makes sense.
-
Thanks so much for your input.
I must admit, I'm not too familiar with Panda, so I'll need to do some digging there. We literally launched the new version of Zibbet two months ago, with different metadata etc., so I'm not sure how that affects things.
If we don't add the rel=canonical, do you think we'll get punished by Google?
-
If I understand correctly, you're asking how you can create a business model that fills up the search results with a bunch of sites that all have the same content. I think you're somewhat late to that party. The Google of today doesn't really let you do that and it's pretty good at preventing it. And if you were thinking of maybe linking back to your main site from all those dupes, I'd rethink that strategy, as well.
-
Hi,
First of all, I would really take care of that Panda issue you have there: http://screencast.com/t/SzbL6hTFwWr
To answer your questions:
- Is this possible?
They can't both rank. You need to decide on a canonical version, the one to rule them all.
- Do we need to add any tags (e.g. "rel=canonical") to one of these so that we're not penalized for duplicate content? If so, which one?
There's no duplicate content penalty as such, but yes, it's best that you choose one version yourself and don't let Google do it for you. Add rel=canonical.
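For example, a minimal sketch assuming the marketplace page is chosen as the canonical version (the item path here is hypothetical, based on the pillowlink URLs from the question), each stand-alone page would point back to its marketplace counterpart:

```html
<!-- In the <head> of http://pillowlink.zibbet.com/some-item -->
<!-- Cross-domain canonical pointing at the marketplace version of the same page -->
<link rel="canonical" href="http://www.zibbet.com/pillowlink/some-item" />
```

With this in place, Google consolidates ranking signals onto the marketplace URL rather than splitting them across the two duplicates.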
- Can we just change the metadata structure of the stand-alone websites to skirt around the duplicate content issue?
That won't help.
Overall, you got hit by Panda already. You should take care of what content you push into the index, as pushing more of it is not the way to recover. Before putting more content into the index, you should clean the site of all Panda-related issues.
Cheers.