Canonical vs Alternate for country-based subdomain dupe content?
-
What's the correct method for tagging dupe content between country-based subdomains?
We have:
mydomain.com // default, en-us
www.mydomain.com // en-us
uk.mydomain.com // uk, en-gb
au.mydomain.com // australia, en-au
eu.mydomain.com // europe, en-eu
In the header of each, we currently have rel="alternate" hreflang tags, but we're still getting dupe content warnings in Moz for the "www" subdomain.
Question 1) Are we headed in the right direction with using alternate? Or would it be better to use canonical, since the languages are technically all English, just different regions? The content is pretty much the same minus currency and localization differences.
Question 2) How can we solve the dupe content between www and the base domain, since the above isn't working?
Thanks so much
-
Yes.
-
Thanks.
So then, am I safe including all of these on every subdomain?
I have a common header where the above is exactly the same for every subdomain (all 4 are always included), which I assume is the correct way?
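For reference, the common block is along these lines on every page of every subdomain (the /product path here is just a placeholder for whatever page is being viewed):
<link rel="alternate" hreflang="en-us" href="https://mydomain.com/product" />
<link rel="alternate" hreflang="en-gb" href="https://uk.mydomain.com/product" />
<link rel="alternate" hreflang="en-au" href="https://au.mydomain.com/product" />
<link rel="alternate" hreflang="en-eu" href="https://eu.mydomain.com/product" />
<!-- "en-eu" mirrors the list above, but Google may not recognize "eu" as a region code; a plain "en" or an x-default is often used for a pan-European English site instead -->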
Also: Why doesn't Moz look at the hreflang tag? I'm very worried about just "ignoring" what the tool says... why is the top SEO tool in the world not capable of correctly detecting dupe content? I'm not sure I'm comfortable with just ignoring the check engine light, so to speak.
-
In cases like yours, using hreflang is the correct way to handle the duplicate content issue, because of the characteristics you yourself cite: currency and localization may be tiny differences in terms of "content", but they are huge in terms of usability and make one product page completely different from another.
Remember that if you canonicalize all the "duplicates" toward a single canonical, the canonicalized URLs won't be shown in the countries you're targeting with them, which completely screws up the international SEO strategy. So each URL must have itself as its canonical (self-referential), apart from the obvious canonicalization rules (e.g. a URL with a parameter canonicalized to the URL without the parameter).
If a URL is canonicalized for whatever reason, remember to use the canonical URL in the href of the hreflang annotations; otherwise Google will start reporting "no return tag" errors.
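To sketch how that fits together (using a placeholder /product path, not your actual markup), the head of a page on uk.mydomain.com would look something like this, with a self-referential canonical and every hreflang href pointing at a canonical URL:
<!-- Sketch: head of https://uk.mydomain.com/product (placeholder path) -->
<link rel="canonical" href="https://uk.mydomain.com/product" />  <!-- self-referential -->
<link rel="alternate" hreflang="en-us" href="https://mydomain.com/product" />
<link rel="alternate" hreflang="en-gb" href="https://uk.mydomain.com/product" />
<link rel="alternate" hreflang="en-au" href="https://au.mydomain.com/product" />
<link rel="alternate" hreflang="en-eu" href="https://eu.mydomain.com/product" />
<link rel="alternate" hreflang="x-default" href="https://mydomain.com/product" />  <!-- optional: default for users outside the targeted regions -->
So if, say, a parameterized URL like uk.mydomain.com/product?sort=price were canonicalized to the parameter-less URL, the hreflang annotations on every version would keep pointing at uk.mydomain.com/product, never at the parameterized one.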
Regarding the Moz Pro crawler... don't pay attention to it: it doesn't consider the hreflang annotations, so it will continue reporting those pages as duplicates.