Does hreflang prevent my site from being penalized for duplicate content?
-
I am currently setting up a travel agency website. The site is going to target both American and Mexican customers, and I will be working with an /es subdirectory. Would hreflang, besides showing the matching language version in the SERPs, prevent my site's translated content (which is pretty much the same) from being penalized for duplicate content? Do I have to implement rel=canonical?
Thank you in advance for any help you can provide.
-
Hi!
Hreflang (plus geotargeting the domain to the US and the /es subfolder to Mexico in Google Webmaster Tools > Settings) will ensure that your US-based users see the US-targeted URLs on Google.com, and Mexican users see the URLs targeting them on Google.com.mx.
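A minimal sketch of what the hreflang annotations could look like in the <head> of both versions of a page (example.com and the /tours/ path are placeholders, not your actual URLs):

    <link rel="alternate" hreflang="en-us" href="https://www.example.com/tours/" />
    <link rel="alternate" hreflang="es-mx" href="https://www.example.com/es/tours/" />
    <link rel="alternate" hreflang="x-default" href="https://www.example.com/tours/" />

Each URL in the pair should carry the same full set of annotations, so the references are reciprocal.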
Regarding the content that will not be translated into Spanish but is still published in the /es subfolder: yes, you will have to canonicalize the duplicated content under /es toward its corresponding "American" URL.
Hreflang, in fact, is not meant to solve duplicate content issues, only how the URLs are shown in the SERPs. For that reason, as Google itself finally explained, rel="canonical" must be used.
That said, don't worry! Even if the canonical is implemented, hreflang will still make Google show the /es URL of the untranslated pages.
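To illustrate the setup described above, the <head> of a not-yet-translated page under /es might carry something like this (again, example.com and the /cancun-hotels/ path are just placeholders):

    <!-- On https://www.example.com/es/cancun-hotels/, where the copy is still in English -->
    <link rel="canonical" href="https://www.example.com/cancun-hotels/" />
    <link rel="alternate" hreflang="en-us" href="https://www.example.com/cancun-hotels/" />
    <link rel="alternate" hreflang="es-mx" href="https://www.example.com/es/cancun-hotels/" />

Once the page is actually translated, the canonical would switch back to a self-referencing one.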
On the other hand, though, it is not wise to leave content unlocalized in Spanish, also because - let's be honest - who in Mexico searches Google.com.mx in English?
P.S.: I was rereading your question before clicking "Post Response", and now I have a doubt. When you write:
Would hreflang, besides showing the matching language version in the SERPs, prevent my site's translated content (which is pretty much the same) from being penalized for duplicate content?
Do you mean: "The content is translated but - well - it's the same as the English"? If that is the case (though I don't think this interpretation is correct), then you don't have to implement any canonical (except for other reasons), because the content is the "same" but in another language, and hence different.
-
You're going to get a variety of responses on this, but in my experience you should:
a) implement rel=canonical, and
b) not expect to be "penalised" for duplicate content.
I had the same site up on two domains for years with .com and .com.au - duplicate content, both ranking well. We have other clients doing the same on .com, .co.uk and .com.au - all with the exact same site and all ranking 40/50+.
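For what it's worth, one way to tie such same-language country versions together, rather than relying on Google simply tolerating the duplication, is hreflang between the domains plus a self-referencing canonical on each. A rough sketch, with hypothetical domains and paths:

    <link rel="alternate" hreflang="en-us" href="https://www.example.com/services/" />
    <link rel="alternate" hreflang="en-gb" href="https://www.example.co.uk/services/" />
    <link rel="alternate" hreflang="en-au" href="https://www.example.com.au/services/" />

The same block would appear on all three versions of the page.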