SEOMOZ and non-duplicate duplicate content
-
Hi all,
Looking through the lovely SEOMOZ report, by far its biggest complaint is perceived duplicate content. It's hard to avoid given the nature of eCommerce sites, which ostensibly list products in a consistent framework.
Most advice about duplicate content concerns canonicalisation, but that's not really relevant when you have two different products being perceived as the same.
Thing is, I might have ignored it, but Google ignores about 40% of our sitemap for, I suspect, the same reason. Basically, I don't want us to appear "spammy". We do actually go to a lot of effort to photograph and write a little flavour text for each product (a work in progress).
I guess my question is: given over 700 products, why would 300-ish of them be considered duplicates and the remaining not?
Here is a URL and one of its "duplicates" according to the SEOMOZ report:
http://www.1010direct.com/DGV-DD1165-970-53/details.aspx
http://www.1010direct.com/TDV-019-GOLD-50/details.aspx
Thanks for any help, people!
-
The point I'm trying to get across is this:
"I asked the question of why these pages are considered duplicate, the answer appears to be : because textually they are even if visually they are not."
I don't think that's the complete answer, or even the most important part of the answer. Sure, having mostly similar content across pages won't help, but as I've tried to point out, there are other factors that come into play here. It's not just about the content, but about putting the content into context for the search engines. In order for them to understand what it is they're looking at, there's more that matters than just the content.
Michel
-
I think this highlights the fundamental problem with SEO and eCommerce sites. We are all aware that the ultimate aim for search engines, and therefore ultimately for SEO, is to add value to users. But is "value" the same for an eCommerce site as it is for a blog, a travel information site, or a site offering health information and advice?
In my opinion, it is not. If I am looking to make a purchase, I am looking for a site that is responsive, easy to navigate, has good imagery to help me visualise the product, is secure, doesn't clutter the page with in-your-face promotional info, and of course offers value for money.
Unique content therefore doesn't really factor into it too much. It's hard enough for us, but I can only imagine how difficult it is for a company selling screws or rope; just how much creativity does it take to write unique content for 3.5-inch brass screws versus 2.5-inch steel ones?
The current mantra is to stop worrying about SEO tricks and focus on building a site with value. But this particular issue is an indication that we are still not there with that utopia yet. For example, as pointed out in the posts above, these pages are considered duplicate because, by percentage, the variable information is minimal. If you look at our product page, we put the functionality for filling in your prescription below the product to make it easier for the customer, but in order to solve the "percentage unique" issue, we would need to move that onto another page. Basically, we need to reduce value (convenience) in order to appear to add value (uniqueness).
Anyway, there's little point complaining. I asked the question of why these pages are considered duplicate, and the answer appears to be: because textually they are, even if visually they are not.
I could be worrying about nothing; I believe all these pages are indexed (through crawling), it's just that a good proportion of our sitemap is being overlooked. I am assuming it's perceived duplication, as suggested by SEOMOZ. That in turn makes me concerned Google is marking us down as spammy.
I appreciate all your comments.
Thanks
Paul
-
I do not agree. I see these kinds of pages on e-commerce websites on a daily basis. For webshops that sell only a certain kind of product, almost all product pages will look alike.
In this case, the H1 is different, the page title is different, and the description is different. This is only a small portion of the page but that's not uncommon, so I would argue that it cannot be just that.
I would look into URLs, and into marking up your data using http://schema.org/Product, possibly making small changes to accommodate the tags - for instance, splitting up brand, colour, etc. so that you can mark them up accordingly.
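For illustration, here's a minimal sketch of what schema.org/Product markup could look like for one of these pages, built as JSON-LD (the product name, brand, colour, and price values are hypothetical placeholders, not taken from the actual site):

```python
import json

# Hypothetical product data - in practice this would come from the shop's database.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "DD1165 970 53 Glasses",                # placeholder name
    "brand": {"@type": "Brand", "name": "Dolce & Gabbana"},
    "color": "Gold",                                 # split out so it can be marked up separately
    "sku": "DGV-DD1165-970-53",
    "offers": {
        "@type": "Offer",
        "priceCurrency": "GBP",
        "price": "89.99",                            # placeholder price
        "availability": "https://schema.org/InStock",
    },
}

# Emit the JSON-LD to embed in a <script type="application/ld+json"> tag.
json_ld = json.dumps(product, indent=2)
print(json_ld)
```

The point of splitting brand, colour, and SKU into distinct structured fields is that crawlers then get an explicit signal that the pages describe different items, even when the surrounding template is identical.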
-
Tom has this spot on. Google doesn't only look for direct duplication but also for very similar pages, and these really are very similar, I'm afraid.
You need to find ways to make each page unique in its own right - let Google see that no two pages are the same and there is a real reason to rank them.
-
I wonder if the details.aspx has something to do with it?
www.1010direct.com/TDV-019-GOLD-50/details.aspx
www.1010direct.com/DGV-DD1165-970-53/details.aspx
Basically, both pages are called details.aspx. Depending on how you look at it, you have two pages with the same name (and mostly similar content, though that's not unusual for e-commerce websites) in different subfolders. I'm not sure how Moz handles this, and whether that's part of why it marks these as duplicate content.
Are you unable to create 'prettier' URL's? Such as:
www.1010direct.com/tim-dilsen-019-gold-50-glasses.aspx
www.1010direct.com/dolce-gabbana-dd1165-970-53-glasses.aspx
With or without the .aspx, of course.
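If URL rewriting is available, descriptive slugs like those could be generated from the product attributes along these lines (a rough sketch; the attribute values are examples, not the site's actual data model):

```python
import re

def slugify(*parts: str) -> str:
    """Join product attributes into a lowercase, hyphen-separated URL slug."""
    text = "-".join(parts).lower()
    # Replace any run of non-alphanumeric characters with a single hyphen.
    text = re.sub(r"[^a-z0-9]+", "-", text)
    return text.strip("-")

# Hypothetical attributes for the two example products.
print(slugify("Tim Dilsen", "019", "Gold", "50", "glasses"))
# -> tim-dilsen-019-gold-50-glasses
print(slugify("Dolce & Gabbana", "DD1165", "970", "53", "glasses"))
# -> dolce-gabbana-dd1165-970-53-glasses
```

Keeping brand and model in the path gives each URL a unique, human-readable identity rather than every page ending in the same details.aspx.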
-
I'm not surprised Moz is flagging those pages as duplicate content and I wouldn't be totally surprised if Google did in the future.
Put it this way: the pages are identical bar a single-sentence title description, a price, and a roughly 20-word section describing the product. Everything else is identical. It's duplicate.
Look at it another way, through Google's eyes. Here's how the two pages look when crawled by Google:
(If that doesn't work, try yourself at http://www.seo-browser.com/)
Just look at how much text and HTML is shared between the two pages. Yes, there are key differences (namely the product itself), but neither Googlebot nor Mozbot is going to recognise those elements when it crawls the page.
Presuming Google ignores the site nav, it still has a bunch of text and crawlable elements that are shared - pretty much everything under the product description. It doesn't see the individual images and the flavour text is frankly too small to make any sort of dent in the duplicate content %.
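To get a feel for why a small patch of unique text barely dents the duplicate-content percentage, here's a toy sketch using word shingles and Jaccard similarity (an illustration of the general near-duplicate technique, not how Moz or Google actually compute it; the page text is invented):

```python
def shingles(text: str, k: int = 3) -> set:
    """Return the set of k-word shingles (overlapping word n-grams) in the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of the two texts' shingle sets (1.0 = identical)."""
    sa, sb = shingles(a), shingles(b)
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

# Two pages sharing the same template text but different short descriptions.
boilerplate = (
    "delivery is free on all orders over fifty pounds simply enter your "
    "prescription details below and add the frames to your basket our opticians "
    "check every order before dispatch returns are accepted within thirty days"
)
page_a = "gold aviator glasses by designer a " + boilerplate
page_b = "black rectangular glasses by designer b " + boilerplate

# The score is dominated by the shared template, not the short unique descriptions.
print(round(jaccard(page_a, page_b), 2))
```

The short flavour text contributes only a handful of unique shingles against the shared template's many, which is exactly the "percentage unique" problem described above.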
I'd seriously recommend revising how your product pages look - there's far too much repeated content per page (you can still promote these things on each page, but in a much, much smaller way), and the individual descriptions for the products, in my eyes, are not substantial enough.