What Happens If a Hreflang Sitemap Doesn't Include Every Language for Missing Translated Pages?
-
We are building a hreflang sitemap for a client and implementing the tag across five languages, including English. However, the News and Events section was never translated into the other four languages, and a few pages were translated into some, but not all, of them. Is it good practice to still list the non-translated pages individually, as on a regular sitemap, without a hreflang tag? And should the hreflang sitemap include the hreflang tag on pages that are missing one or two of the language translations?
We are uncertain whether this inconsistency would create a problem, and we would like some feedback before pushing the hreflang sitemap live.
-
Hi Kyle,
I would probably include only the URLs that have been translated in the hreflang sitemap, unless the English content that hasn't been translated can be assigned to one territory/language, e.g. en-US or en-GB (note that hreflang codes use a hyphen, not an underscore).
If the pages that haven't been translated are linked to throughout the site and are present in a regular sitemap, they will be found. I'm not 100% sure that leaving them out of the hreflang sitemap is the right approach, so I'll leave this question open for others to reply as well. However, I'd say that if you want the English-language content to serve internationally because it hasn't been translated, it would be a mistake to target it to a single English-speaking region via hreflang.
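To illustrate, here is a minimal sketch of what that hreflang sitemap could look like (the URLs and language set are hypothetical, not your client's): a fully translated page lists all five alternates, a partially translated page lists only the alternates that actually exist, and English-only pages such as News and Events are simply left to the regular sitemap.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <!-- Fully translated page: all five language versions listed as alternates -->
  <url>
    <loc>https://example.com/en/products/</loc>
    <xhtml:link rel="alternate" hreflang="en" href="https://example.com/en/products/"/>
    <xhtml:link rel="alternate" hreflang="de" href="https://example.com/de/produkte/"/>
    <xhtml:link rel="alternate" hreflang="fr" href="https://example.com/fr/produits/"/>
    <xhtml:link rel="alternate" hreflang="es" href="https://example.com/es/productos/"/>
    <xhtml:link rel="alternate" hreflang="it" href="https://example.com/it/prodotti/"/>
  </url>
  <!-- Partially translated page: only the alternates that actually exist -->
  <url>
    <loc>https://example.com/en/about/</loc>
    <xhtml:link rel="alternate" hreflang="en" href="https://example.com/en/about/"/>
    <xhtml:link rel="alternate" hreflang="de" href="https://example.com/de/ueber-uns/"/>
  </url>
  <!-- English-only pages (e.g. News and Events): omit from this file
       and keep them in the regular sitemap with no hreflang annotations -->
</urlset>
```

Two details worth keeping in mind: each `<url>` entry should include a self-referencing hreflang link (as above), and each language version of a page needs the same reciprocal set of alternates in its own entry, or search engines may ignore the annotations entirely.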