Japanese URL-structured sitemap (pages) not being indexed by Bing Webmaster Tools
-
Hello everyone,
I am facing an issue with the sitemap submission feature in Bing Webmaster Tools for a project whose Japanese content lives in a subdirectory. Just to outline the key points:
-
The website is based on a subdirectory URL ( example.com/ja/ )
-
The Japanese URLs (as created when pages are published in WordPress) are not percent-encoded. They are entered in pure Kanji.
-
Google Webmaster Tools, for instance, has no issues reading and indexing the pages' URLs in its sitemap submission area (all pages are being indexed).
When it comes to Bing Webmaster Tools, though, it's a different story. After the sitemap has been submitted ( example.com/ja/sitemap.xml ), Bing reports an error that it failed to download this part of the sitemap: "page-sitemap.xml" (the child sitemap containing all of the site's pages). As a result, no URLs have been submitted to Bing either.
My suspicion is that Bing Webmaster Tools does not understand the Japanese URLs (or the Kanji, for that matter), so I wonder what the correct way to go about this is.
When viewing the sitemap ( example.com/ja/page-sitemap.xml ) in a web browser, though, the Japanese URLs' characters are already displayed in percent-encoded form.
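For illustration, here is a minimal Python sketch of how a raw Kanji URL and its percent-encoded form relate (the page name 会社概要 is a hypothetical slug, not one of the actual URLs); a UTF-8 sitemap can carry either spelling, and both identify the same page:

```python
from urllib.parse import quote, unquote

# Hypothetical Japanese permalink as WordPress stores it (raw Kanji path segment).
raw_url = "https://example.com/ja/会社概要/"

# Percent-encode everything except the URL structure characters (":" and "/").
# This is the form most browsers show when you view the sitemap source.
encoded_url = quote(raw_url, safe=":/")
print(encoded_url)
# https://example.com/ja/%E4%BC%9A%E7%A4%BE%E6%A6%82%E8%A6%81/

# Decoding the encoded form gives the original Kanji back, i.e. the two
# spellings point at the same resource.
assert unquote(encoded_url) == raw_url
```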
I am not sure whether submitting the Kanji-style URLs separately is a solution; in Bing Webmaster Tools this can only be done at the root domain level ( example.com ). Surely there must be a way to make Bing's sitemap submission understand Japanese-style sitemaps?
Many thanks everyone for any advice!
-
Hello there,
Thanks for your suggestions, and sorry for the late response. In fact, I also left an inquiry with the Bing Webmaster Tools email support (I did not even realise they offered this service), and they answered within one day.
They confirmed that the site runs without any errors and that the sitemap has now been submitted successfully. Upon checking, I can confirm this (the sitemap's URLs have finally been submitted). Therefore, all is in order now.
I still do not understand why the JA sitemap URLs were not being submitted before this (for weeks), even though I tried to make Bing Webmaster Tools re-crawl the sitemap by re-submitting it.
In any case, I guess this is one of those episodes where the problem simply fixed itself. Kudos to their support, though...
Thanks everyone
-
Hey there, a few thoughts/questions:
- Have you correctly implemented hreflang tags (tags that declare the alternate language and country versions in the <head> section of every page of your site)?
- Why did you choose to create a separate sitemap that lives under the /ja page path? You could, instead, add alternate URLs for the JP version of your content to your existing sitemap (see the sketch below).
- I doubt this is why you're seeing issues, but is there a particular reason you chose JA as the page path as opposed to the ISO country code for Japan, JP?
To specifically answer your question about Kanji: I have not found anything that states Bing does not support Kanji. After some preliminary searching, it also looks like Bing does present URLs with Kanji characters in its results (example). As a result, I don't think Kanji is the reason you're having trouble getting your JP sitemap read by Bing.
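To illustrate the second suggestion above, here is a minimal Python sketch of the sitemap flavour of hreflang annotations (xhtml:link alternate entries). The /about/ and /ja/会社概要/ URLs are hypothetical, and it is worth verifying separately how consistently Bing honours this format compared with on-page hreflang tags:

```python
from urllib.parse import quote

# Hypothetical English page and its Japanese counterpart (raw Kanji slug, percent-encoded).
en_url = "https://example.com/about/"
ja_url = quote("https://example.com/ja/会社概要/", safe=":/")

# One <url> entry carrying both language versions as xhtml:link alternates,
# which is the sitemap form of hreflang annotations.
entry = f"""<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>{en_url}</loc>
    <xhtml:link rel="alternate" hreflang="en" href="{en_url}"/>
    <xhtml:link rel="alternate" hreflang="ja" href="{ja_url}"/>
  </url>
</urlset>"""

with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write(entry)
```

In a full sitemap, each language version would get its own <url> entry repeating the same set of alternates.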
Related Questions
-
SEO - New URL structure
Hi, Currently we have the following URL structure for all pages, regardless of the hierarchy: domain.co.uk/page, such as domain/blog name. Can you please confirm the following:
1. What is the benefit of organising the pages as a hierarchy, i.e. domain/features/feature-name or domain/industries/industry-name or domain/blog/blog name, etc.?
2. This will create too many 301s - what is Google's tolerance of redirects?
Is it worth changing the URL structure, or would you only recommend adding breadcrumbs? Many thanks, Katarina
Technical SEO | Katarina-Borovska
-
Tool to Generate All the URLs on a Domain
Hi all, I've been using xml-sitemaps.com for a while to generate a list of all the URLs that exist on a domain. However, this tool only works for websites with under 500 URLs on a domain. The paid tool doesn't offer what we are looking for either. I'm hoping someone can help with a recommendation. We're looking for a tool that can:
- Crawl, and list, all the indexed URLs on a domain, including .pdf and .doc files (ideally in a .xls or .txt file)
- Crawl multiple domains with unlimited URLs (we have 5 websites with 500+ URLs on them)
Seems pretty simple, but we haven't been able to find something that isn't tailored toward management of a single domain or that can crawl a huge volume of content.
Technical SEO | timfrick
-
Flat vs Hierarchical URL Structure
Hi, We are redoing our site structure and I was wondering what the benefits of a flat URL structure are. For example, store.com/product instead of store.com/category/product. I noticed sites doing it both ways; even moz.com has both structures, e.g. moz.com/learn/seo, and when you click on something it brings you to moz.com/seo-expert-quiz (even though, following the previous logic, it should be moz.com/learn/seo/seo-expert-quiz). Please advise, Thanks!
Technical SEO | WSteven
-
Investigating a huge spike in indexed pages
I've noticed an enormous spike in pages indexed through WMT in the last week. Now I know WMT can be a bit (OK, a lot) off base in its reporting, but this was pretty hard to explain. See, we're in the middle of a huge campaign against dupe content and we've put a number of measures in place to fight it. For example:
- Implemented a strong canonicalization effort
- NOINDEX'd content we know to be duplicate programmatically
- Are currently fixing true duplicate content issues through rewriting titles, descriptions, etc.
So I was pretty surprised to see the blow-up. Any ideas as to what else might cause such a counter-intuitive trend? Has anyone else seen Google do something that suddenly gloms onto a bunch of phantom pages?
Technical SEO | farbeseo
-
Should all pagination pages be included in sitemaps
How important is it for a sitemap to include all individual URLs for the paginated content? Assuming the rel next and prev tags are set up, would it be OK to just have page 1 in the sitemap?
Technical SEO | Saijo.George
-
Can you noindex a page, but still index an image on that page?
If a blog is centered around visual images, and we have specific pages with high-quality content that we plan to index and drive our traffic with, but we have many pages with just our images... what is the best way to go about getting these images indexed? We want to noindex all the pages with just images because they are thin content. Can you noindex, follow a page, but still index the images on that page? Please explain how to go about this concept.
Technical SEO | WebServiceConsulting.com
-
Landing Page URL Structure
We are finally setting up landing pages to support our PPC campaigns. There has been some debate internally about the URL structure. Originally we were planning on URLs like:
domain.com
/california
/florida
/ny
I would prefer to have the URLs for each state inside a "state" folder like:
domain.com
/state
/california
/florida
/ny
I like having the folders and pages for each state under a parent folder to keep the root folder as clean as possible. Having a folder or file for each state in the root will be very messy. Before you scream URL rewriting :-), our current site is still running under Classic ASP, which doesn't support URL rewriting. We have tried to use HeliconTech's ISAPI rewrite module for IIS but had to remove it because of too many configuration issues. Next year, when our recoding to MVC is complete, we will use URL rewriting. So the question for now: Is there any advantage or disadvantage to one URL structure over the other?
Technical SEO | briankb
-
De-indexing millions of pages - would this work?
Hi all, We run an e-commerce site with a catalogue of around 5 million products. Unfortunately, we have let Googlebot crawl and index tens of millions of search URLs, the majority of which are very thin on content or duplicates of other URLs. In short: we are in deep. Our bloated Google index is hampering our real content's ability to rank; Googlebot does not bother crawling our real content (product pages specifically) and hammers the life out of our servers. Since having Googlebot crawl and de-index tens of millions of old URLs would probably take years (?), my plan is this:
1. 301 redirect all old SERP URLs to a new SERP URL.
2. If the new URL should not be indexed, add a meta robots noindex tag on the new URL.
3. When it is evident that Google has indexed most "high quality" new URLs, disallow crawling of old SERP URLs in robots.txt.
4. Then remove all old SERP URLs directory-style in the GWT URL Removal Tool.
This would be an example of an old URL:
www.site.com/cgi-bin/weirdapplicationname.cgi?word=bmw&what=1.2&how=2
This would be an example of a new URL:
www.site.com/search?q=bmw&category=cars&color=blue
I have two specific questions:
1. Would Google both de-index the old URL and not index the new URL after 301 redirecting the old URL to the new URL (which is noindexed), as described in point 2 above?
2. What risks are associated with removing tens of millions of URLs directory-style in the GWT URL Removal Tool? I have done this before, but then I removed "only" some 50,000 useless "add to cart" URLs. Google says themselves that you should not remove duplicate/thin content this way and that using this tool this way "may cause problems for your site".
And yes, these tens of millions of SERP URLs are the result of a faceted navigation/search function let loose all too long. And no, we cannot wait for Googlebot to crawl all these millions of URLs in order to discover the 301s. By then we would be out of business.
Best regards, TalkInThePark
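Purely as an illustration of steps 1 and 2 of the plan above: a tiny Flask sketch that 301-redirects the old search URL to its new form and serves the new URL with a noindex directive via the X-Robots-Tag header (equivalent to a meta robots noindex tag). The actual stack is clearly not Python, given the cgi-bin URLs, so the route and handler names here are hypothetical:

```python
from flask import Flask, redirect, request, make_response

app = Flask(__name__)

@app.route("/cgi-bin/weirdapplicationname.cgi")
def old_search():
    # Step 1 of the plan: permanently redirect the legacy SERP URL
    # to its new equivalent, carrying the query term across.
    word = request.args.get("word", "")
    return redirect(f"/search?q={word}", code=301)

@app.route("/search")
def new_search():
    # Step 2: serve the new SERP URL, but mark it noindex so the
    # redirect target does not bloat the index either.
    resp = make_response("search results here")
    resp.headers["X-Robots-Tag"] = "noindex"
    return resp

if __name__ == "__main__":
    app.run()
```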
Technical SEO | TalkInThePark