301s vs. rel=canonical for duplicate content across domains
-
Howdy mozzers,
I just took on a telecommunications client who has spent the last few years acquiring smaller communications companies. When they took over these companies, they simply duplicated their site at all the old domains, resulting in a bunch of sites across the web with the exact same content. Obviously I'd like them all 301'd to their main site, but I'm getting pushback.
Am I OK to simply plug in rel=canonical tags across the duplicate sites? All the content is literally exactly the same.
Thanks as always
-
Awesome - thanks Matthew.
-
Thanks Ade. This is really helpful. Much appreciated!
-
Hey,
You can use cross-domain canonicals. Google does support them (http://www.youtube.com/watch?v=zI6L2N4A0hA), and the one time I had to use one, it did seem to help. That said, I'm not sure whether Bing supports it.
Hope that helps.
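To illustrate, the tag goes in the <head> of every page on the duplicate domains and points at the matching URL on the main site (the domains below are placeholders, not the client's actual sites):

```html
<!-- On http://acquired-brand.example/pricing/ (the duplicate page) -->
<!-- Tells search engines the main site's copy is the preferred version -->
<link rel="canonical" href="http://main-telecom.example/pricing/" />
```

Note it needs to be page-to-page (each duplicate page pointing at its own equivalent), not every page pointing at the main homepage.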
-
Hi James,
Using a rel=canonical will work, but I'm guessing that at some point your client will want to update the content of their website. If you go the rel=canonical route, that means you'd also have to push the same updated content out to all of the other domains.
Depending on your hosting set-up, you may be able to get around this by adding all of the duplicate domains as parked domains on the web server, but this can get very messy.
Unless there is a really good reason not to, I would use 301s.
The one exception: if any of the domains have been hit by the Penguin update, I wouldn't 301 those domains.
Cheers.
Ade.
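If the client does come around to redirects, a domain-wide 301 is typically only a few lines of server config. A sketch for Apache's mod_rewrite, assuming an .htaccess file on the acquired domain (domain names are placeholders):

```apache
# .htaccess on the acquired domain
# 301 every request to the same path on the main site
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?acquired-brand\.example$ [NC]
RewriteRule ^(.*)$ http://main-telecom.example/$1 [R=301,L]
```

This maps each old URL to its equivalent on the main site rather than dumping everything on the homepage, which preserves more link equity per page.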
Related Questions
-
How do I avoid this issue of duplicate content with Google?
I have an ecommerce website which sells a product that has many different variations based on a vehicle's make, model, and year. Currently we sell this product on one page, www.cargoliner.com/products.php?did=10001, and show a modal to sort through each make, model, and year. This matters because each make/model/year combination has different prices/configurations. For example, for the Jeep Wrangler and Jeep Cherokee we might have different products:
Ultimate Pet Liner - Jeep Wrangler 2011-2013 - $350
Ultimate Pet Liner - Jeep Wrangler 2014-2015 - $350
Ultimate Pet Liner - Jeep Cherokee 2011-2015 - $400
Although the typical consumer might think we have one product (the Ultimate Pet Liner), we look at these as many different products, each with a different configuration and different variants. We do NOT have unique content for each make, model, and year; the content and images are the same for all of them. When the customer selects their make, model, and year, we just search and replace the text. For example, when a customer selects 2015 Jeep Wrangler from the modal, the page keeps the same URL (www.cargoliner.com/products.php?did=10001) but the product title will say "2015 Jeep Wrangler".
Here's my problem: we want all of these individual products to have their own unique URLs (e.g. cargoliner.com/products/2015-jeep-wrangler) so we can reference them in emails to customers, and ideally so we can start creating unique content for them. Our only problem is that there will be hundreds of them, and they won't have unique content other than the swapped-in product title and change of variants. Also, we don't want our URL www.cargoliner.com/products.php?did=10001 to lose its link juice.
Here are my questions: my assumption is that I should keep my URL www.cargoliner.com/products.php?did=10001 and let visitors sort through the products on that page. Then I should go ahead and make individual URLs for each of these products (i.e. cargoliner.com/products/2015-jeep-wrangler) but add a "noindex, nofollow" to those pages. Is this what I should do? How reliable is a noindex, nofollow on a webpage? Does Google still index it? Am I at risk of duplicate content penalties? Thanks!
Who gets punished for duplicate content?
What happens if two domains have duplicate content? Do both domains get punished for it, or just one? If so, which one?
Using canonical for duplicate content outside of my domain
I have 2 domains for the same company, example.com and example.sg. Sometimes we have to post the same content or event on both websites, so to protect my websites from a duplicate content penalty I use a canonical tag pointing to either .com or .sg, depending on the page. Any idea if this is the right decision? Thanks
How different does content need to be to avoid a duplicate content penalty?
I'm implementing landing pages that are optimized for specific keywords. Some of them are substantially the same as another page (perhaps 10-15 words different). Are the landing pages likely to be identified by search engines as duplicate content? How different do two pages need to be to avoid the duplicate penalty?
Duplicate Content
Hi, we need some help resolving a duplicate content issue. We have redirected both domains to this Magento website, and I guess Google now considers this duplicate content. Our client wants both domain names to go to the same Magento store. What is the safe way of letting Google know these are the same company? Or is this not an ideal thing to do? Thanks
Is 100% duplicate content always duplicate?
Bit of a strange question here that I would be keen on getting the opinions of others on. Let's say we have a web page which is 1,000 lines long, pulling content from 5 websites (the content itself is duplicate - say RSS headlines, for example). Obviously any of that content on its own would be viewed by Google as duplicate and would suffer for it. However, one of the ways duplicate content is assessed is a page being x% the same as another page, be it on your own site or someone else's. In the case of our page, while 100% of the content is duplicated from elsewhere, the page is no more than 20% identical to any one other page, so would it technically be picked up as duplicate? Hope that makes sense. My reason for asking is I want to pull the latest tweets, news, and RSS from leading sites onto a site I am developing. Obviously the site will have its own content too, but I also want to pull in external content.
Getting rid of duplicate content with rel=canonical
This may sound like a stupid question, however it's important that I get this 100% straight. A new client has nearly 6k duplicate page titles / descriptions. To cut a long story short, this is mostly the same page (or rather a set of pages), however every time Google visits these pages they get a different URL. Hence the astronomical number of duplicate page titles and descriptions. Now the easiest way to fix this looks like canonical linking. However, I want to be absolutely 100% sure that Google will then recognise that there is no duplicate content on the site. Ideally I'd like to 301 but the developers say this isn't possible, so I'm really hoping the canonical will do the job. Thanks.
Canonical Link for Duplicate Content
A client of ours uses some unique keyword tracking for their landing pages, where they append certain metrics in a query string and pull that information out dynamically to learn more about their traffic (similar to Google's UTM tracking). Nonetheless, these query strings are now being indexed as separate pages in Google and Yahoo and are being flagged as duplicate content/title tags by the SEOmoz tools. For example:
Base page: www.domain.com/page.html
Tracking: www.domain.com/page.html?keyword=keyword#source=source
Now both of these are being indexed even though it is only one page. So I suggested placing a canonical link tag in the header pointing back to the base page to start discrediting the tracking URLs. But this means that the base pages will be pointing to themselves as well; would that be an issue? Is there a better way to solve this without removing the query tracking altogether? Thanks - Kyle Chandler