Cross-Domain Canonical and duplicate content
-
Hi Mozfans!
I'm working on SEO for one of my new clients, a job site (I'll call it Site A).
The thing is, the client has about three sites with the same jobs on them. I'm seeing a duplicate content problem; the only catch is that the jobs on the other sites must stay there, so the client doesn't want to remove them. There is another (non-ranking) reason for that.
Can I solve the duplicate content problem with a cross-domain canonical?
The client wants to rank well with the site I'm working on (Site A). Thanks!
Rand did a Whiteboard Friday about the cross-domain canonical:
http://www.seomoz.org/blog/cross-domain-canonical-the-new-301-whiteboard-friday -
Every document I have seen agrees that canonical tags are followed when the tag is used appropriately.
The tag could be misused either intentionally or unintentionally in which case it would not be honored. The tag is meant to connect pages which offer identical information, very similar information, or the same information presented in a different format such as a modified sort order, or a print version. I have never seen nor even heard of an instance where a properly used canonical tag was not respected by Google or Bing.
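As a simple illustration of that last case (the URLs here are made up): a print-friendly duplicate would point its canonical at the primary version of the page:

```html
<!-- On the print version, e.g. example.com/article?print=1 (illustrative URL): -->
<link rel="canonical" href="https://example.com/article" />
```

The print page then consolidates its signals onto the primary page rather than competing with it.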
-
Thanks Ryan, I hadn't noticed that about the reply sequencing, and you're right, I read them in the wrong order. It makes much more sense now.
By "some" support, I meant that even Google, via Matt Cutts, says they don't take the cross-domain canonical as "a directive" but rather as "a hint" (and even that assumes Google agrees with you that your pages are duplicates).
So the magic question is: how much authority do Bing and Google give rel="canonical", and is it similar between the two engines?
-
One aspect of the SEOmoz Q&A structure I dislike is the ordering of responses. Rather than maintaining a timeline order, the responses are re-ordered based on other factors such as "thumbs-up" and staff endorsements. I understand the concept that replies which are liked more are probably more helpful and should be seen first, but it causes confusion such as in this case.
Dr. Pete's response on the Bing cross-canonical topic appears first, but it was offered second-to-last chronologically speaking. We originally agreed there was not evidence indicating Bing supported the cross-canonical tag, then he located such evidence and therefore we agree Bing does support the tag.
The statement Dr. Pete shared was that "Bing does support cross-domain canonical". There was no limiting factor. I mention this because you said they offered "some" support and I am not sure why you used that qualifier.
-
Ryan, at the end of the thread you linked to, it seems like both Dr. Pete and yourself agreed that there wasn't much evidence of Bing support. Have you learned something that changed your mind?
I know a rep from Bing told Dr. Pete there was "some" support, but what does that mean? i.e. Do exactly identical sites pass a little juice/authority, or do similar sites pass **a lot** of juice/authority?
Take a product that has different brands in different parts of the country, Hellmann's and Best Foods for example. They have two sites which are the same except for logos. Here is a recipe from the Best Foods site; the Hellmann's site carries the same recipe at the equivalent URL:
http://www.bestfoods.com/recipe_detail.aspx?RecipeID=12497&version=1
The sites are nearly identical except for logos/product names.
For the (very) long tail keyword "Mayonnaise Bobby Flay Waldorf salad wrap" Best Foods ranks #5 and Hellmann's ranks #11.
I doubt they have an SEO looking very closely at the sites, because in addition to their duplicate content problem, neither page has a meta description.
If the Hellmann's page had a `<link rel="canonical" href="http://www.bestfoods.com/recipe_detail.aspx?RecipeID=12497&version=1" />` tag,
I'd expect to see the Best Foods page move up and the Hellmann's page move down in Google. Bing, however, appears to dislike the duplicate pages more: currently the Best Foods version ranks #12 and the Hellmann's version doesn't rank at all. My own (imperfect) tests lead me to believe that adding the rel="canonical" would help in Google but not in Bing.
Obviously, the site owner would probably like one of those two pages to rank very high for the unbranded keyword, but they would want both pages to rank well if I added a branded term. My experience with the cross-domain canonical in Google leads me to believe that even the non-canonical version would rank for branded keywords in Google, but what would Bing do?
I'd be very cautious about relying on the cross-domain canonical in Bing until I see some PUBLIC announcement that it's supported.
-
I was a bit confused when I read that. You put my mind at rest!
-
My apologies Atul. I am not sure what I was thinking when I wrote that. Please disregard.
-
Thanks Ryan!
So it will be a canonical tag.
-
I would advise NOT using the robots.txt file if at all possible. In general, the robots.txt file is a means of absolute last resort. The main reason I use the robots.txt file is because I am working with a CMS or shopping cart that does not have the SEO flexibility to noindex pages. Otherwise, the best robots.txt file is a blank one.
When you block a page in robots.txt, you are not only preventing the content from being crawled and indexed, but you are also blocking the natural flow of PageRank through your site. The link juice which flows to the blocked page dies there, as crawlers cannot access it.
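To make the contrast concrete (the /jobs/ path here is just an illustration): a robots.txt rule like `Disallow: /jobs/` stops crawlers at the door, so any link juice flowing into those pages dies there. A meta robots noindex keeps the page crawlable, so it can stay out of the index while still passing link equity onward:

```html
<!-- In the <head> of each page you want out of the index (a sketch, not the client's actual markup): -->
<!-- "noindex" keeps the page out of results; "follow" lets its links keep passing PageRank -->
<meta name="robots" content="noindex, follow" />
```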
-
That is correct. If you choose to read the information directly from Google, it can be found here:
-
Thanks!
It's for a site in the Netherlands, and Google is about 98% of the market. Bing is coming up though, so it's something to check.
No-roboting is a way to do it that I hadn't thought about! Thanks for that. I will check with the client.
-
Thanks Ryan!
So the setup will be like this:
On the other sites I will use the canonical to point everything to Site A.
-
You mean rel=author on Site A? How does it help? Where should rel=author point to?
-
According to Dr. Pete, Bing does support the cross-domain canonical.
If you disagree, I would first recommend using rel=author to establish that "Site A" is the source of the article.
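For what it's worth, rel=author is typically implemented as a link element on the article pointing at the author's profile (the href below is a placeholder, not a real profile URL):

```html
<!-- On each Site A article page; the profile URL is a placeholder: -->
<link rel="author" href="https://plus.google.com/112345678901234567890" />
```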
-
A cross-domain canonical will help with Google (make sure the pages truly are duplicates or very close); however, I haven't found any confirmation yet that Bing supports the cross-domain canonical.
If the other sites don't need to rank at all, you could also consider no-roboting the job pages on the other sites, so that only Site A's job listings get indexed.
-
Yes. A cross-domain canonical would solve the duplicate content issue and focus ranking signals on the main site (Site A).
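As a rough sketch (domains and paths are placeholders, not the client's real URLs): each duplicate job page on the secondary sites would carry a link element in its `<head>` pointing at the matching listing on Site A:

```html
<!-- On the duplicate job page at site-b.example (placeholder domain): -->
<link rel="canonical" href="https://site-a.example/jobs/frontend-developer" />
```

Site A's own pages need no tag (or can self-reference), and the duplicates' signals consolidate onto Site A.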