Google Not Indexing XML Sitemap Images
-
Hi Mozzers,
We are having an issue with our XML sitemap images not being indexed.
The site has over 39,000 pages and 17,500 images submitted in GWT. If you take a look at the attached screenshot, 'GWT Images - Not Indexed', you can see that the majority of the pages are being indexed - but none of the images are.
The first thing you should know about the images is that they are hosted on a content delivery network (CDN), rather than on the site itself. However, Google's advice suggests hosting on a CDN is fine - see the second screenshot, 'Google CDN Advice'. That advice says to either (i) ensure the hosting site is verified in GWT or (ii) submit the sitemap via robots.txt. As we can't verify the hosting site in GWT, we opted to submit via robots.txt.
There are 3 sitemap indexes: 1) http://www.greenplantswap.co.uk/sitemap_index.xml, 2) http://www.greenplantswap.co.uk/sitemap/plant_genera/listings.xml and 3) http://www.greenplantswap.co.uk/sitemap/plant_genera/plants.xml.
Each sitemap index is split into hundreds or even thousands of smaller XML sitemaps. This is necessary given the size of the site and how we pull URLs in; done another way, some of the sitemaps would have been massive and taken upwards of a minute to load.
To give you an idea of what is being submitted to Google in one of the sitemaps, please see view-source:http://www.greenplantswap.co.uk/sitemap/plant_genera/4/listings.xml?page=1.
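In case it helps picture the set-up, here is a simplified sketch in Python of how an index and its child image sitemaps fit together (the URLs and file layout are placeholders for illustration, not our actual generation code):

CHILD_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
{urls}
</urlset>"""

URL_TEMPLATE = """  <url>
    <loc>{page}</loc>
    <image:image><image:loc>{image}</image:loc></image:image>
  </url>"""

INDEX_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
{children}
</sitemapindex>"""

def build_child(pairs):
    # pairs: list of (page_url, image_url) tuples for one small sitemap
    urls = "\n".join(URL_TEMPLATE.format(page=p, image=i) for p, i in pairs)
    return CHILD_TEMPLATE.format(urls=urls)

def build_index(child_sitemap_urls):
    # each child sitemap gets one <sitemap> entry in the index
    children = "\n".join(
        f"  <sitemap><loc>{u}</loc></sitemap>" for u in child_sitemap_urls
    )
    return INDEX_TEMPLATE.format(children=children)

print(build_child([("http://www.example.co.uk/plants/1",
                    "http://cdn.example.com/plants/1.jpg")]))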
Originally, the image URLs were SSL, so we reverted to non-SSL URLs as that was an easy change. But over a week later, that seems to have had no impact. The image URLs are ugly... but should this prevent them from being indexed?
The strange thing is that a very small number of images have been indexed - see http://goo.gl/P8GMn. I don't know whether this is an anomaly, or whether it suggests the images are set up fine and the problem lies elsewhere.
Sorry for the long message but I would be extremely grateful for any insight into this. I have tried to offer as much information as I can, however please do let me know if this is not enough.
Thank you for taking the time to read and help.
Regards,
Mark
-
Hi Mark,
I'm just following the thread as I have a similar problem. Would you mind sharing your results from the tests?
Thanks,
Bogdan -
Thanks Everett - that's exactly what I intend to do.
We will be testing two new sitemaps with 100 URLs each: 1) with just the duplicate file extension removed, and 2) with the entire cropping part of the URL removed, as suggested by Matt.
Will be interested to see whether just one or both of the sitemaps are successful. Will of course post the outcome here, for anyone who might have this problem in future.
-
It isn't always that simple. Maybe commas don't present a problem on their own. Maybe double file extensions don't present a problem on their own. Maybe a CDN doesn't present a problem on its own. Maybe very long, complicated URLs don't present a problem on their own.
You have all of these. Together, in any combination, they could make indexation of your images a problem for Google.
Just test it out on a few. Get rid of the file extension. If that doesn't work, get rid of the comma. That is all you can do. Start with whatever is easiest for the developer to implement, and test it out on a few before rolling it out across all of your images.
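For example, here's a rough sketch of the two variants in Python - the regexes assume the Cloudinary URL shape posted earlier in this thread, so treat them as illustrative rather than production-ready:

import re

# One of the messy URLs from this thread, for illustration.
messy = ("http://res.cloudinary.com/greenplantswap/image/upload/"
         "c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720.jpg,"
         "g_center,h_900,q_80,w_900/v1352575097/nprvu0z6ri227cgnpmqc.jpg")

# Variant 1: drop only the stray ".jpg" inside the transformation segment
# (the one followed by a comma), leaving the real file extension alone.
variant_1 = re.sub(r"\.jpg(?=,)", "", messy)

# Variant 2: drop the comma-laden crop/fill segments entirely, keeping just
# upload/<version>/<image id>.jpg.
variant_2 = re.sub(r"upload/.+/(v\d+/)", r"upload/\1", messy)

print(variant_1)
print(variant_2)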
-
Cheers for that mate - especially the useful Excel formula.
I am going to try a few things in isolation so that we can accurately say which element/s caused the issue.
Thanks again, mate.
-
Ignore the developer - just because something worked for one site doesn't mean it'll work for yours.
The easiest way to test this is to manually create a sitemap with 100 or so 'clean' image URLs. Just pull the messy ones into Excel and use the formula below to create a clean version (put the messy URL in A1 and the formula in B1).
Good luck mate.
=CONCATENATE("<image:image><image:loc>http://res.cloudinary.com/greenplantswap/image/upload/",RIGHT(A1,LEN(A1)-FIND("~",SUBSTITUTE(A1,"/","~",LEN(TRIM(A1))-LEN(SUBSTITUTE(A1,"/",""))))),"</image:loc></image:image>")
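(To unpack that: SUBSTITUTE swaps the last "/" in the messy URL for a "~" - any character that can't appear in the URL would do - FIND locates that marker, RIGHT grabs everything after it, i.e. just the image file name, and CONCATENATE wraps the result in image sitemap tags with a clean base URL.)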
-
Thanks for the responses guys, much appreciated.
In terms of the commas, that was something I put to the developer; however, he was able to come back with examples where this has clearly not been an issue - e.g. apartable.com have commas in their URLs and use the same CDN (Cloudinary).
However, I agree with you that double file extension could be the issue. I may have to wait until next week to find out as the developer is working on another project, but will post the outcome here once I know.
Thank you again for the help!
-
Hello Edlondon,
I think you're probably answering your own question here. Google typically doesn't have any problem indexing images served from a CDN. However, I've seen Google have problems with commas in the URL at times. Typically it happens when other elements in the URL are also troublesome, such as your double file extension.
Are you able to rename the files to get rid of the superfluous .jpg extension? If so, I'd recommend trying it out on a few dozen images. We could come up with a lot of hypotheses, but that would be the one I'd test first.
-
Hmmm, I'll step off here - I've never used cloudinary.com, or even heard of them. I personally use NetDNA with pull zones (meaning they load the image/CSS/JS from your origin and store a version on their servers), while handling cropping/resizing on my own end via PHP. For example, with http://cdn.fulltraffic.net/blog/thumb/58x58/youtube-video-xQmQeKU25zg.jpg, try changing the 58x58 to another size: my server handles the crop/resize, while NetDNA serves the result and stores it for future loads.
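If it's useful, the origin-side resize looks roughly like this - sketched here in Python with Flask and Pillow rather than my actual PHP, and with a made-up route and folder name:

from io import BytesIO

from flask import Flask, abort, send_file
from PIL import Image

app = Flask(__name__)

@app.route("/blog/thumb/<int:w>x<int:h>/<name>.jpg")
def thumb(w, h, name):
    if not (0 < w <= 2000 and 0 < h <= 2000):
        abort(400)  # refuse absurd sizes so the cache can't be abused
    try:
        img = Image.open("originals/{}.jpg".format(name))  # hypothetical source folder
    except FileNotFoundError:
        abort(404)
    img.thumbnail((w, h))  # resize in place, preserving aspect ratio
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=80)
    buf.seek(0)
    # The pull-zone CDN fetches this response once, then serves its cached copy.
    return send_file(buf, mimetype="image/jpeg")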
-
Found one of the sites with the same Cloudinary URLs with commas - apartable.com
See Google image results: https://www.google.co.uk/search?q=site:apartable.com&tbm=isch
Their images appear to be well indexed. One thing I have noticed, however, is that we often have .jpg twice in the image URL. E.g.:
- http://res.cloudinary.com/greenplantswap/image/upload/c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720.jpg,g_center,h_900,q_80,w_900/v1352574983/oyfos82vwvmxdx91hxaw.jpg
- http://res.cloudinary.com/greenplantswap/image/upload/c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720.jpg,g_center,h_900,q_80,w_900/v1352574989/s09cv3krfn7gbyvw3r2y.jpg
- http://res.cloudinary.com/greenplantswap/image/upload/c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720.jpg,g_center,h_407,q_80,w_407/v1352575010/rl7cl4xi0timza1sgzxj.jpg
Wonder if that is confusing Google? Then again, it isn't consistent: a few of our images with exactly the same kind of URL as those listed above have been indexed.
-
Thought I had them in an email, but they must be buried in our fairly cumbersome Skype thread... let me have a dig through when I get a chance and I'll post them up here.
-
Hmmmm, okay... Could you post the examples they gave, and an example page where the images are located on the site?
-
Hi Matt,
Thought I should let you know that (i) the X-Robots-Tag was not set, so that's not the issue and (ii) the URLs, although ugly, are not the issue either. We had a couple of examples of websites with the same thing (I'm told the commas facilitate on-the-fly sizing and cropping) and their images were indexed fine.
So, back to the drawing board for me! Thank you very much for the suggestions, really do appreciate it.
Mark
-
Hmm, interesting - we hadn't thought of the X-Robots-Tag HTTP header. I'm going to fire that over to the developer now.
As for the URLs, they are awful! But I am told that this is not a problem - though perhaps it's worth chasing up again, as other solutions have so far been unfruitful.
Thanks for taking the time to help, Matt - I'll let you know if that fixes it! Unfortunately it could be another week before I know, as the developer is currently working on another project so any changes may be early-mid next week.
Thanks again...
-
This is a bit of a long shot, but if the files have been uploaded using their API, the 'X-Robots-Tag' HTTP header may have been set to noindex...
Also, those URLs don't look great with the commas in them. Have you tried testing a small subset that just has the image ID (e.g. http://res.cloudinary.com/greenplantswap/image/upload/nprvu0z6ri227cgnpmqc.jpg)?
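If it helps, a quick way to check the header is a few lines of Python using only the standard library (swap in one of your real image URLs):

from urllib.request import Request, urlopen

# Substitute one of your real image URLs here.
url = ("http://res.cloudinary.com/greenplantswap/image/upload/"
       "nprvu0z6ri227cgnpmqc.jpg")

req = Request(url, method="HEAD")  # HEAD: headers only, no image download
with urlopen(req) as resp:
    tag = resp.headers.get("X-Robots-Tag")

# Anything like "noindex" here would keep the image out of Google's index.
print(tag or "no X-Robots-Tag header set")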
Matt
-
Hi Federico,
Thanks very much for taking the time to respond.
To answer your question, we are using http://cloudinary.com/. So, taking one of the examples from the XML sitemap I posted above, an example of an image URL is http://res.cloudinary.com/greenplantswap/image/upload/c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720.jpg,g_center,h_900,q_80,w_900/v1352575097/nprvu0z6ri227cgnpmqc.jpg (what a lovely URL!).
I had a look at http://res.cloudinary.com/robots.txt and it seems that they are not blocking anything - the disallow instruction is commented out. I assume that is indeed the robots.txt I should be looking at?
Assuming it is, this does not appear to get to the bottom of why the images are not being indexed.
Any further assistance would be greatly appreciated - we have 17k unique images that could be driving traffic and this is a key way that people find our kind of website.
Thanks,
Mark
-
In the robots.txt file on the CDN (which one are you using?), have you allowed Google to crawl the images?
Most CDNs I know allow you to block search engines via robots.txt to avoid bandwidth consumption.
If you are using NetDNA (MaxCDN) or the like, make sure your robots.txt isn't disallowing crawlers.
We use a CDN too, to deliver images and static files, and all of them are being indexed. We tested disallowing crawlers, but it caused a lot of warnings, so now we allow all of them to read and index content (the bandwidth is a small price to pay to have your content indexed).
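A quick way to verify is Python's built-in robotparser (the CDN hostname and image URL below are placeholders - use your own):

from urllib.robotparser import RobotFileParser

# Check whether the CDN's robots.txt lets Google's crawlers fetch an image.
rp = RobotFileParser("http://cdn.example.com/robots.txt")  # placeholder host
rp.read()

image_url = "http://cdn.example.com/images/example.jpg"  # placeholder image

for agent in ("Googlebot", "Googlebot-Image"):
    print(agent, "allowed:", rp.can_fetch(agent, image_url))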
Hope that helps!