I'm setting up my online store in WordPress/WooCommerce and want to avoid duplicate content.
Hi Mozers,
I'm using unique content in the short description area, which displays next to the product photo, and I'm happy with how that looks. But is adding an informational description that repeats on every product page going to hurt us in SEO?
A. See here an actual product (flagged for thin content in OSE).
B. This is how I would like to set up each product page to improve it: see here a sample product with additional information/content.
Here's my question: would setting my product pages up like version B be considered duplicate content by Google?
It's not so much that this is going to hurt you as that it won't really help. You're far better off putting the effort into creating genuinely helpful content for each product. It's time-consuming, but you only get out what you put in!
You're not going to get a manual penalty for this sort of thing, but it isn't exactly a great quality signal to have most pages on your site showing the same content over and over.
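If you want a rough, do-it-yourself way to gauge how much of each product page is repeated boilerplate versus unique copy, a short script can approximate it. This is a hypothetical helper, not something from the thread; the URLs are placeholders and the score is only a crude signal.

```python
# Rough sketch: estimate how similar each product page is to a reference
# page. High similarity across many pages suggests a shared info block
# is drowning out the unique short descriptions. URLs are placeholders.
import difflib

import requests  # third-party: pip install requests


def similarity(a: str, b: str) -> float:
    """Fraction (0..1) of matching text between two documents."""
    return difflib.SequenceMatcher(None, a, b).ratio()


reference = requests.get("https://example.com/product/widget-a").text
for url in [
    "https://example.com/product/widget-b",
    "https://example.com/product/widget-c",
]:
    page = requests.get(url).text
    print(f"{url}: {similarity(reference, page):.0%} similar to reference")
```

Comparing raw HTML inflates the score because the theme markup is shared too, so treat the numbers as relative: if version-B pages score far higher against each other than version-A pages do, the repeated block dominates.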
Related Questions
I'm stumped!
I'm hoping to find a real expert to help out with this. TL;DR: our visibility in search has started tanking and I cannot figure out why.

The whole story: in fall of 2015 I started working with Convention Nation (www.conventionnation.com). The client is trying to build a resource for convention and tradeshow attendees that helps them identify the events that will meet their goals (learning, networking, sales, whatever). They had a content team overseas that spent their time copy/pasting event information into our database. At the time, I identified several opportunities to improve SEO:

- Create and submit a sitemap
- Add meaningful metas
- Fix crawl errors
- On-page content "uniqueification" and optimization for the most visible events (largest audience likely to search)
- Regular publishing and social media

Over nine months, we did these things and saw search visibility, average rank and CTR all double or better. There was still one problem, and it is created by our specific industry. I'll use a concrete example: MozCon. This event happens once a year, and enough things are the same about it every year (namely, the generalized description of the event, attendees and outcomes) that the 2015 page was getting flagged as a duplicate of the 2016 page. The event content for most of our events was pretty thin anyway, and much of it was duplicated from other sources, so we implemented a feature that grouped recurring events. My thinking was that this would reduce the perception of duplicate or obsolete content and links, and provide a nice backlink opportunity.

I expected a dip after we deployed this grouping feature; that's been consistent with other bulk content changes we've made to the site. But we are not recovering from the dip. In fact, our search visibility and traffic are dropping every week. So, the current state of things is this:

- Clean crawl reports: no errors reported by Moz or Google
- Moz domain authority: 20; spam score: 2/17
- A little thin on incoming links, but steady growth in both social media and backlinks
- Continuing to add thin/duplicate content for unique events at a rate of 200 pages/month
- Adding solid, unique strategic content at a rate of 15 pages/month

I just cannot figure out where we've gone astray. Is there anything other than the thin/copied content that could be causing this? It wasn't hurting us before we grouped the events... What could possibly account for this trend? Help me, Moz Community, you're my only hope!

Lindsay
Intermediate & Advanced SEO | LindsayDayton
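One detail worth checking in a situation like Lindsay's is what happened to the old dated event URLs after the grouping feature shipped. If they were left live as near-duplicates of the group page rather than redirected, their links keep splitting. A sketch in Apache config, with entirely hypothetical paths, assuming the dated pages should consolidate into the group page:

```apache
# Hypothetical .htaccess rules (mod_alias): permanently redirect dated
# event pages to the grouped event page so their links and signals
# consolidate there instead of competing as near-duplicate pages.
RedirectPermanent /events/mozcon-2015 /events/mozcon
RedirectPermanent /events/mozcon-2016 /events/mozcon
```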
Client wants to remove mobile URLs from their sitemap to avoid indexing issues. However, this will require SEVERAL billing hours. Is having both mobile/desktop URLs in a sitemap really that detrimental to search indexing?
We had an enterprise client ask us to remove mobile URLs from their sitemaps; for their website, desktop and mobile URLs are combined into one sitemap. Their website has a mobile template (not a responsive website) and is configured properly via Google's "separate URLs" guidelines. Our client is referencing a statement from John Mueller that having both mobile and desktop URLs in sitemaps can be problematic for indexing. Here is the article: https://www.seroundtable.com/google-mobile-sitemaps-20137.html

We would be happy to remove the mobile URLs from their sitemap. However, this will unfortunately take several billing hours for our development team to implement and QA, which will end up costing our client a great deal of money. Is it worth it to remove the mobile URLs to be in adherence to John Mueller's advice? We don't believe these extra mobile URLs are harming their search indexing, but we can't find any sources to explain otherwise. Any advice would be appreciated. Thx.

Intermediate & Advanced SEO | RosemaryB
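For reference, Google's separate-URLs guidelines do support listing only the desktop URLs in the sitemap and expressing the mobile versions as annotations, which is roughly what John Mueller's comment points toward. A sketch with placeholder URLs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>http://www.example.com/page-1/</loc>
    <!-- Point to the mobile variant instead of listing it separately;
         the m. page should rel="canonical" back to this desktop URL. -->
    <xhtml:link rel="alternate"
                media="only screen and (max-width: 640px)"
                href="http://m.example.com/page-1/"/>
  </url>
</urlset>
```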
Duplicate content within sections of a page but not full page duplicate content
Hi, I am working on a website redesign. The client offers several services, and some elements of those services cross over with one another. For example, they offer a service called Modelling, and when you click onto that page, several elements that build up that service are featured; in this case, 'mentoring'. Mentoring is common to other services and will therefore feature on other service pages. Each page will feature a mixture of content unique to that service and small sections of duplicate content, and I'm not sure how to treat this. One idea we have come up with is to take the user through to a unique page that hosts all the shared content, although some features do not warrant a page being created for them. Another idea is to have the feature pop up with inline content. Any thoughts/experience on this would be much appreciated.
Intermediate & Advanced SEO | J_Sinclair
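For the first idea, one common pattern is to give each service page a short, service-specific summary of the shared feature and link through to a single page that holds the full description. A sketch with made-up copy and a placeholder URL:

```html
<!-- On the Modelling service page: a summary of "mentoring" written for
     this service, linking to the one canonical mentoring page. -->
<section class="service-feature">
  <h3>Mentoring</h3>
  <p>One-to-one mentoring for modelling clients: portfolio reviews,
     casting preparation and agency introductions.</p>
  <a href="/services/mentoring/">Read the full mentoring overview</a>
</section>
```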
I'm looking for an SEO person for my project.
Hi guys, I'm looking for an SEO person who can help me with my project. I've been losing rankings day by day for the last two months. For more details and pricing, please PM me. Thanks
Intermediate & Advanced SEO | KLLC
Penalised for duplicate content, time to fix?
Ok, I accept this one is my fault, but I'm wondering about timescales to fix it... I have a website, and I put an affiliate store on it using merchant datafeeds in a bid to get revenue from the site. This was all good; however, I forgot to put noindex on the datafeed/duplicate content pages, and over a couple of weeks the traffic to the site died. I have since nofollowed or removed the products, but some three months later my site still will not rank for the keywords it was ranking for previously. It will not even rank if I type in the site's name (Bright Tights). I have searched for the name using bright tights, "bright tights" and brighttights, but none of them return the site anywhere. I am guessing that I have been hit with a drop-x-places penalty by Google for the duplicate content. What is the easiest way around this? I have no warnings about bad links or such. Is it worth battling on trying to get the domain back, or should I write off the domain, buy a new one and start again, minus the duplicate content? The goal of having the duplicate content store on the site was to be able to rank the store's category pages, which had unique content on them, so I could foresee no problems with that. Like Amazon et al, the categories would have lists of products (amongst other content) and you would click through to the individual product description: the duplicate page. Thanks for reading.
Intermediate & Advanced SEO | Grumpy_Carl
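For anyone in the same spot, the fix the poster alludes to is a robots meta tag on each datafeed page: noindex keeps the page out of the index, while follow still lets crawlers pass through its links. A minimal sketch:

```html
<!-- In the <head> of every affiliate/datafeed product page: -->
<meta name="robots" content="noindex, follow">
```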
What constitutes duplicate content?
I have a website that lists various events. One particular event at a local swimming pool occurs every few months: for example, once in December 2011 and again in March 2012, and it will probably happen again in the future. Each event has its own 'event' page, which includes a description of the event and other details. In the example above, the only thing that changes is the date of the event, which is in an H2 tag. I'm getting this flagged as duplicate content in SEOmoz Pro. I could combine these pages, since the vast majority of the content is duplicated, but that would be a lot of work. Any suggestions on a strategy for handling this problem?
Intermediate & Advanced SEO | ChatterBlock
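One way to make near-identical recurring event pages machine-distinguishable is schema.org Event markup, so the differing dates are explicit rather than buried in an H2. A sketch with placeholder details (JSON-LD postdates this question, but it is the format Google now recommends for structured data):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Event",
  "name": "Family Swim Night",
  "startDate": "2012-03-17T18:00",
  "location": {
    "@type": "Place",
    "name": "Local Swimming Pool",
    "address": "123 Example Street"
  },
  "description": "Open swim session for families."
}
</script>
```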
Google consolidating link juice on duplicate content pages
I've observed some strange findings on a website I am diagnosing and it has led me to a possible theory that seems to fly in the face of a lot of thinking: My theory is:
Intermediate & Advanced SEO | | James77
When google see's several duplicate content pages on a website, and decides to just show one version of the page, it at the same time agrigates the link juice pointing to all the duplicate pages, and ranks the 1 duplicate content page it decides to show as if all the link juice pointing to the duplicate versions were pointing to the 1 version. EG
Link X -> Duplicate Page A
Link Y -> Duplicate Page B Google decides Duplicate Page A is the one that is most important and applies the following formula to decide its rank. Link X + Link Y (Minus some dampening factor) -> Page A I came up with the idea after I seem to have reverse engineered this - IE the website I was trying to sort out for a client had this duplicate content, issue, so we decided to put unique content on Page A and Page B (not just one page like this but many). Bizarrely after about a week, all the Page A's dropped in rankings - indicating a possibility that the old link consolidation, may have been re-correctly associated with the two pages, so now Page A would only be getting Link Value X. Has anyone got any test/analysis to support or refute this??0 -
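Whatever Google does on its own, you can make the consolidation the poster describes explicit with a canonical tag, which Google documents as consolidating signals (including links) onto the chosen URL. A sketch with a placeholder URL:

```html
<!-- On Duplicate Page B: declare Page A as the canonical version, so
     link signals consolidate there deliberately rather than being left
     to Google's own duplicate clustering. -->
<link rel="canonical" href="https://www.example.com/page-a/">
```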
Robots.txt: Link Juice vs. Crawl Budget vs. Content 'Depth'
I run a quality vertical search engine. About six months ago we had a problem with our sitemaps, which resulted in most of our pages getting tossed out of Google's index. As part of the response, we put a bunch of robots.txt restrictions in place on our search results to prevent Google from crawling through pagination links and other parameter-based variants of our results (sort order, etc.). The idea was to 'preserve crawl budget' in order to speed up the rate at which Google could get our millions of pages back into the index, by focusing attention/resources on the right pages. The pages are back in the index now (and have been for a while), and the restrictions have stayed in place since that time. But in doing a little SEOmoz reading this morning, I came to wonder whether that approach may now be harming us:

http://www.seomoz.org/blog/restricting-robot-access-for-improved-seo
http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions

Specifically, I'm concerned that a) we're blocking the flow of link juice, and b) by preventing Google from crawling the full depth of our search results (i.e. pages > 1), we may be making our site wrongfully look 'thin'. With respect to b), we've been hit by Panda and have been implementing plenty of changes to improve engagement, eliminate inadvertently low-quality pages, etc., but we have yet to find 'the fix'... Thoughts?

Kurus

Intermediate & Advanced SEO | kurus
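The trade-off the poster is circling is real: pages blocked in robots.txt cannot be crawled at all, so links on them pass nothing and the deeper result pages stay invisible. A meta robots "noindex, follow" tag on the variant pages keeps them out of the index while still letting crawlers flow through. The robots.txt approach being described probably looks something like this sketch, with hypothetical URL parameters:

```
# Hypothetical robots.txt along the lines described: block parameter
# variants of search results (Google supports the * wildcard).
User-agent: *
Disallow: /*?sort=
Disallow: /*&sort=
Disallow: /*?page=
```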