Google Not Indexing XML Sitemap Images
-
Hi Mozzers,
We are having an issue with our XML sitemap images not being indexed.
The site has over 39,000 pages and 17,500 images submitted in GWT. If you take a look at the attached screenshot, 'GWT Images - Not Indexed', you can see that the majority of the pages are being indexed - but none of the images are.
The first thing you should know about the images is that they are hosted on a content delivery network (CDN) rather than on the site itself. However, Google's advice suggests that hosting on a CDN is fine - see the second screenshot, 'Google CDN Advice'. That advice says to either (i) ensure the hosting site is verified in GWT or (ii) submit the sitemap via robots.txt. As we can't verify the hosting site in GWT, we opted to submit via robots.txt.
There are 3 sitemap indexes: 1) http://www.greenplantswap.co.uk/sitemap_index.xml, 2) http://www.greenplantswap.co.uk/sitemap/plant_genera/listings.xml and 3) http://www.greenplantswap.co.uk/sitemap/plant_genera/plants.xml.
Each sitemap index is split into what is often hundreds or thousands of smaller XML sitemaps. This is necessary because of the size of the site and the way we pull URLs in; done any other way, some of the sitemaps would have been massive and taken upwards of a minute to load.
To give you an idea of what is being submitted to Google in one of the sitemaps, please see view-source:http://www.greenplantswap.co.uk/sitemap/plant_genera/4/listings.xml?page=1.
Originally the image URLs were SSL, so we reverted to non-SSL URLs as that was an easy change. But over a week later, that seems to have had no impact. The image URLs are ugly... but should that prevent them from being indexed?
The strange thing is that a very small number of images have been indexed - see http://goo.gl/P8GMn. I don't know whether this is an anomaly, or whether it suggests the images are set up fine and the problem lies elsewhere.
Sorry for the long message, but I would be extremely grateful for any insight into this. I have tried to offer as much information as I can, but please do let me know if anything is missing.
Thank you for taking the time to read and help.
Regards,
Mark
-
Hi Mark,
I'm just following the thread as I have a similar problem. Would you mind sharing your results from the tests?
Thanks,
Bogdan
-
Thanks Everett - that's exactly what I intend to do.
We will be testing two new sitemaps with 100 URLs each: (1) with just the superfluous file extension removed, and (2) with the entire cropping part of the URL removed, as suggested by Matt.
Will be interested to see whether just one or both of the sitemaps are successful. Will of course post the outcome here, for anyone who might have this problem in future.
-
It isn't always that simple. Maybe commas don't present a problem on their own. Maybe double file extensions don't present a problem on their own. Maybe a CDN doesn't present a problem on its own. Maybe very long, complicated URLs don't present a problem on their own.
You have all of these. Together, in any combination, they could make indexation of your images a problem for Google.
Just test it out on a few. Get rid of the file extension. If that doesn't work, get rid of the comma. That is all you can do. Start with whatever is easiest for the developer to implement, and test it out on a few before rolling it out across all of your images.
-
Cheers for that mate - especially the useful Excel formula.
I am going to try a few things in isolation so that we can accurately say which element/s caused the issue.
Thanks again, mate.
-
Ignore the developer - just because something worked for another site doesn't mean it'll work for yours.
The easiest way to test this is to manually create a sitemap with 100 or so 'clean' image URLs. Just pull the messy ones into Excel and use the formula below to create a clean version (put the messy URL in A1 and the formula in B1; the "|" in the formula is just a marker character that shouldn't appear in your URLs).
Good luck mate.
=CONCATENATE("<image:image><image:loc>http://res.cloudinary.com/greenplantswap/image/upload/",RIGHT(A1,LEN(A1)-FIND("|",SUBSTITUTE(A1,"/","|",LEN(A1)-LEN(SUBSTITUTE(A1,"/",""))))),"</image:loc></image:image>")
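If Excel isn't your thing, the same clean-up can be scripted. This is only a rough sketch of the same idea in Python - the input file name is a placeholder, and the output still needs wrapping into a full sitemap before you submit it:

```python
# Rough sketch: take the image ID after the last "/" in each messy URL,
# rebuild it on the plain Cloudinary upload path, and wrap it in image
# sitemap tags. "messy_image_urls.txt" is a placeholder - one URL per line.
from xml.sax.saxutils import escape

CLEAN_BASE = "http://res.cloudinary.com/greenplantswap/image/upload/"

with open("messy_image_urls.txt") as f:
    messy_urls = [line.strip() for line in f if line.strip()]

for url in messy_urls[:100]:                  # test batch of ~100 images
    image_id = url.rsplit("/", 1)[-1]         # e.g. nprvu0z6ri227cgnpmqc.jpg
    clean_url = escape(CLEAN_BASE + image_id)
    print("<image:image><image:loc>%s</image:loc></image:image>" % clean_url)
```

The printed entries still need to sit inside the relevant <url> elements of a test sitemap (with the image namespace declared); submit that in GWT and see which versions get picked up.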
-
Thanks for the responses guys, much appreciated.
In terms of the commas, that was something that I put to the developer; however, he was able to come back with examples where this has clearly not been an issue - e.g. apartable.com have commas in their URLs and use the same CDN (Cloudinary).
However, I agree with you that the double file extension could be the issue. I may have to wait until next week to find out, as the developer is working on another project, but I will post the outcome here once I know.
Thank you again for the help!
-
Hello Edlondon,
I think you're probably answering your own question here. Google typically doesn't have any problem indexing images served from a CDN. However, I've seen Google have problems with commas in the URL at times. Typically it happens when other elements in the URL are also troublesome, such as your double file extension.
Are you able to rename the files to get rid of the superfluous .jpg extension? If so, I'd recommend trying it out on a few dozen images. We could come up with a lot of hypotheses, but that is the one I'd test first.
-
Hmmm, I'll step off here - I've never used cloudinary.com, or even heard of them. I personally use NetDNA with pull zones (which means they load the image/CSS/JS from your origin and store a version on their servers), while handling cropping/resizing on my own end via PHP and then loading that image. Example: http://cdn.fulltraffic.net/blog/thumb/58x58/youtube-video-xQmQeKU25zg.jpg - try changing the 58x58 to another size and my server will handle the crop/resize while NetDNA serves it and stores it for future loads.
-
Found one of the sites with the same Cloudinary URLs with commas - apartable.com
See Google image results: https://www.google.co.uk/search?q=site:apartable.com&tbm=isch
Their images appear to be well indexed. One thing I have noticed, however, is that we often have .jpg twice in the image URL. E.g.:
- http://res.cloudinary.com/greenplantswap/image/upload/c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720.jpg,g_center,h_900,q_80,w_900/v1352574983/oyfos82vwvmxdx91hxaw.jpg
- http://res.cloudinary.com/greenplantswap/image/upload/c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720.jpg,g_center,h_900,q_80,w_900/v1352574989/s09cv3krfn7gbyvw3r2y.jpg
- http://res.cloudinary.com/greenplantswap/image/upload/c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720.jpg,g_center,h_407,q_80,w_407/v1352575010/rl7cl4xi0timza1sgzxj.jpg
Wonder if that is confusing Google? Then again, it isn't consistent, as Google does have a few images indexed with exactly the same kind of URL as those listed above.
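In case it helps anyone checking for the same thing, here's a quick and dirty way to count how many image URLs in one of our sitemaps carry the doubled extension (it uses the example sitemap I linked earlier and a plain regex rather than a proper XML parser, so treat it as a rough check only):

```python
# Quick check: how many <image:loc> URLs in a sitemap contain ".jpg" more
# than once? The sitemap URL is just the example from earlier in the thread.
import re
import urllib.request

SITEMAP_URL = "http://www.greenplantswap.co.uk/sitemap/plant_genera/4/listings.xml?page=1"

xml = urllib.request.urlopen(SITEMAP_URL).read().decode("utf-8")
image_urls = re.findall(r"<image:loc>(.*?)</image:loc>", xml)

doubled = [u for u in image_urls if u.lower().count(".jpg") > 1]
print("%d of %d image URLs contain .jpg more than once" % (len(doubled), len(image_urls)))
```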
-
Thought I had them in an email, but they must be in our fairly cumbersome Skype thread... let me have a dig through when I get a chance and I'll post them up here.
-
Hmmmm, okay... Could you post the examples they gave, and an example page where the images are located on the site?
-
Hi Matt,
Thought I should let you know that (i) the X-Robots-Tag was not set, so that's not the issue and (ii) the URLs, although ugly, are not the issue either. We had a couple of examples of websites with the same thing (I'm told the commas facilitate on-the-fly sizing and cropping) and their images were indexed fine.
So, back to the drawing board for me! Thank you very much for the suggestions, really do appreciate it.
Mark
-
Hmm, interesting - we hadn't thought of the X-Robots-Tag HTTP header. I'm going to fire that over to the developer now.
As for the URLs, they are awful! I am told that this is not a problem, but perhaps it is worth chasing up again, as other solutions have so far been unfruitful.
Thanks for taking the time to help, Matt - I'll let you know if that fixes it! Unfortunately it could be another week before I know, as the developer is currently working on another project so any changes may be early-mid next week.
Thanks again...
-
This is a bit of a long shot, but if the files have been uploaded using their API it may be that the 'X-Robots-Tag' HTTP header is set to noindex...
Also, those URLs don't look great with the commas in them. Have you tried doing a small subset that just has the image id (e.g. http://res.cloudinary.com/greenplantswap/image/upload/nprvu0z6ri227cgnpmqc.jpg)?
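If you want to check a handful yourself before waiting on the developer, something along these lines should do it - just a quick sketch, so swap in a few of your own image URLs:

```python
# Minimal sketch: print the X-Robots-Tag header (if any) returned with a
# few CDN image URLs. The list below is just an example - use your own URLs.
import urllib.request

test_urls = [
    "http://res.cloudinary.com/greenplantswap/image/upload/nprvu0z6ri227cgnpmqc.jpg",
]

for url in test_urls:
    with urllib.request.urlopen(url) as resp:
        tag = resp.headers.get("X-Robots-Tag")
        print(url, "->", tag if tag else "no X-Robots-Tag header")
```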
Matt
-
Hi Federico,
Thanks very much for taking the time to respond.
To answer your question, we are using http://cloudinary.com/. So, taking one of the examples from the XML sitemap I posted above, an example of an image URL is http://res.cloudinary.com/greenplantswap/image/upload/c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720.jpg,g_center,h_900,q_80,w_900/v1352575097/nprvu0z6ri227cgnpmqc.jpg (what a lovely URL!).
I had a look at http://res.cloudinary.com/robots.txt and it seems that they are not blocking anything - the disallow instruction is commented out. I assume that is indeed the robots.txt I should be looking at?
Assuming it is, this does not appear to get to the bottom of why the images are not being indexed.
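(For anyone following this later: a quick way to double-check that robots.txt is to run one of the image URLs through a robots.txt parser - rough sketch below, using one of our messy sitemap URLs.)

```python
# Rough check: does res.cloudinary.com's robots.txt allow Googlebot-Image
# to fetch one of our image URLs? The URL is one of the messy examples above.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser("http://res.cloudinary.com/robots.txt")
rp.read()

image_url = ("http://res.cloudinary.com/greenplantswap/image/upload/"
             "c_crop,g_north,h_0.9,w_1.0/c_fill,d_no_image_icon-720x720.jpg,"
             "g_center,h_900,q_80,w_900/v1352575097/nprvu0z6ri227cgnpmqc.jpg")
print(rp.can_fetch("Googlebot-Image", image_url))  # True means not blocked
```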
Any further assistance would be greatly appreciated - we have 17k unique images that could be driving traffic and this is a key way that people find our kind of website.
Thanks,
Mark
-
Within the robots.txt file on the CDN (which one are you using?), have you set it to allow Google to index your images?
Most CDNs I know allow you to block search engines via robots.txt to avoid bandwidth consumption.
If you are using NetDNA (MaxCDN) or the like, make sure your robots.txt file isn't disallowing robots from crawling.
We use a CDN too, to deliver images and static files, and all of them are being indexed. We tested disallowing crawlers, but it caused a lot of warnings, so instead we now allow all of them to read and index the content (a small price to pay to have your content indexed).
Hope that helps!