Could we run into issues with duplicate content penalties if we were to borrow product descriptions?
-
Hello,
I work for an online retailer that has the opportunity to add a lot of SKUs to our site in a relatively short amount of time by borrowing content from another site (with their permission). There are a lot of positives for us to do this, but one big question we have is what the borrowed content will do to our search rankings (we normally write our own original content in house for a couple thousand SKUs). Organic search traffic brings in a significant chunk of our business and we definitely don't want to do something that would jeopardize our rankings.
Could we run into issues with duplicate content penalties if we were to use the borrowed product descriptions?
Is there a rule of thumb for what proportion of the site should be original content vs. duplicate content without running into issues with our search rankings?
Thank you for your help!
-
I think Alan and EGOL have summed it up nicely for you.
I have looked at a lot of Panda-hit sites, and one of the most common issues was e-commerce sites consisting primarily of stock product descriptions. Why would Google want to rank a site highly that just contains information hundreds of other sites already have?
If a large chunk of your site contains duplicate descriptions like this, you can attract a Panda flag, which can cause your whole site to rank poorly, not just the product pages.
You could use the duplicate product descriptions if you had a large amount of original and helpful text around them. However, no one knows what the safe ratio is. If you have the ability to rewrite the product descriptions, that is by far the best thing to do.
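No one knows the ratio, but you can at least audit how much of your catalogue is effectively duplicated before publishing. A minimal sketch using Python's standard `difflib` (the catalogue data and the 0.9 threshold are hypothetical, purely for illustration):

```python
from difflib import SequenceMatcher

def similarity(ours: str, stock: str) -> float:
    """Word-level similarity ratio in [0, 1]; 1.0 means identical text."""
    return SequenceMatcher(None, ours.lower().split(), stock.lower().split()).ratio()

# Hypothetical catalogue: SKU -> (our description, supplier's stock description)
catalogue = {
    "SKU-001": ("Hand-finished solid oak writing desk with a built-in cable tidy and soft-close drawer.",
                "Oak desk with cable tidy."),
    "SKU-002": ("Oak desk with cable tidy.",
                "Oak desk with cable tidy."),
}

THRESHOLD = 0.9  # arbitrary cut-off for "effectively duplicate"

# Flag SKUs whose copy is essentially the supplier's stock text
flagged = [sku for sku, (ours, stock) in catalogue.items()
           if similarity(ours, stock) >= THRESHOLD]
print(flagged)  # SKU-002 is verbatim stock copy; SKU-001 has been rewritten
```

This won't tell you where Google draws the line, but it gives you a worklist of pages to prioritise for rewriting.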
-
Just adding a point to this (and with reference to the other good points left by others) - Writing good product descriptions isn't actually that expensive!
It always seems that way because they are usually done in big batches; on a per-product basis, though, they are pretty cheap. Do it well and you will not only improve your search results, but also improve conversions and even make the pages more linkable.
Pick a product at random. Would it be worth a few £/$ to sell more of that item? If not, remove it from the site anyway.
-
Adding a lot of SKUs to your site in a relatively short amount of time by borrowing content from another site sounds more like a bad sales pitch than a good "opportunity". If you don't want to jeopardize a significant chunk of your business, simply drip the new SKUs in as you get new content for them. The thin content isn't likely to win you any new search traffic, so unless adding these products will quickly and dramatically increase sales from your existing traffic, why go down that road?
-
Adding emphasis to the danger:
Duplicate product descriptions are the single most problematic issue e-commerce sites face from an SEO perspective. Not only are most canned descriptions so short that product pages get treated as thin content, but copied/borrowed descriptions are also likely to be spread across countless other sites.
While it may seem like an inordinate amount of time and cost, unique, quality descriptions that are long enough to truly set product pages apart will go a long way toward proving a site deserves ranking and trust.
-
You can hit Panda problems doing this. If you have lots of this content the rankings of your entire site could be damaged.
Best to write your own content, or put this content on pages that are not indexed until you have replaced it with original content.
Or you could publish it to get into the index and replace it as quickly as possible.
The site you are getting this content from could be damaged as well.
-
You definitely could run into trouble here. Duplicate content of this type is meant to be dealt with on a page-level basis. However, if Google thinks it is manipulative, it can impact the domain as a whole. By "thinks" I really mean "if it matches certain patterns that manipulative sites use" - there is rarely an actual human review.
It is more complex than a simple percentage; many factors are likely involved. However, there is a solution!
You can simply add a noindex tag to the product pages that have non-original content. That'll keep them out of the index and keep you on the safe side of duplicate content issues.
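For reference, the noindex directive mentioned above is the standard robots meta tag, placed in the `<head>` of each product page that carries borrowed copy (a "noindex, follow" value keeps the page out of the index while still letting crawlers follow its links):

```html
<!-- In the <head> of each product page with borrowed descriptions -->
<meta name="robots" content="noindex, follow">
```

Once a page has an original description, remove the tag so it can be indexed again.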