Not all images indexed in Google
-
Hi all,
Recently, we've run into an unusual issue with images in Google's index. We have more than 1,500 images in our sitemap, but according to Search Console only 273 of those are indexed. If I check Google Images directly, I find more images in the index, but still not all of them.
For example, this post has 28 images and only 17 of them are indexed in Google Images. The same is happening with other posts.
We've checked all the likely causes (missing alt text, images used as CSS backgrounds, file size, fetch and render in Search Console), but none of them apply in our case. So everything looks fine, yet not all of the images are in the index.
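For context, here's roughly how we enumerate the image entries in our sitemap to compare against Search Console's count - a minimal sketch in Python; the sitemap URL below is a placeholder, not our real one:

```python
# Minimal sketch: count the <image:image> entries in an image sitemap
# to cross-check against Search Console's indexed-image count.
# The sitemap URL is a placeholder - substitute your own.
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://example.com/sitemap.xml"  # placeholder

NS = {
    "sm": "http://www.sitemaps.org/schemas/sitemap/0.9",
    "image": "http://www.google.com/schemas/sitemap-image/1.1",
}

with urllib.request.urlopen(SITEMAP_URL) as response:
    tree = ET.parse(response)

image_locs = [
    loc.text
    for loc in tree.getroot().findall(".//image:image/image:loc", NS)
]
print(f"{len(image_locs)} image URLs listed in the sitemap")
```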
Any ideas on this issue?
Your feedback is much appreciated, thanks
-
Fetching, rendering, caching and indexing are all different. Sometimes they're all part of the same process, sometimes not. When Google 'indexes' images, that's primarily for its image search engine (Google Images). 'Indexing' something means that Google is listing that resource within its own search results for one reason or another. For the same reasons that Google rarely indexes all of your web-pages, Google also rarely indexes all of your images.
That doesn't mean that Google 'can't see' your images or has an imperfect view of your web-page. It simply means that Google does not believe the images you have uploaded are 'worthy' enough to be served to an end-user who is performing a certain search on Google Images. If you think that gaining normal web rankings is tricky, remember that most users only utilise Google Images for certain (specific) reasons. Maybe they're trying to find a meme to add to their post on a forum thread or as a comment on a social network. Maybe they're looking for PNG icons to add into their PowerPoint presentations.
In general, images from the commercial web are... well, they're commercially driven (usually). When was the last time you expressly set out to search for ads to look at on Google Images? Never? OK then.
First, Google fetches a page or resource by visiting its URL. If the resource or web-page is of moderate to high value, Google may then render it. Google doesn't always do this; usually rendering is reserved for pages that are important yet heavily modified by something like JS or AJAX, so that not all the info is in the basic 'source code' / view-source.
Following this, Google may decide to cache the web-page or resource. Finally, if the page or resource is deemed worthy enough and Google's algorithm(s) decide that it could potentially satisfy a certain search query (or array thereof), the resource or page may be indexed. All of this can occur in various patterns, e.g. indexing a resource without caching it, or caching a resource without indexing it (there are many reasons for this which I won't get into now).
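To make that distinction concrete, here's a toy sketch - emphatically not Google's actual internals, just an illustration - of the four stages as independent outcomes rather than one all-or-nothing step:

```python
# Toy illustration (an assumption for teaching purposes, not Google's
# real pipeline): fetch, render, cache and index are independent flags.
from dataclasses import dataclass

@dataclass
class ResourceStatus:
    fetched: bool = False   # Googlebot requested the URL
    rendered: bool = False  # a headless render ran (JS executed)
    cached: bool = False    # a cached copy is stored
    indexed: bool = False   # eligible to appear in search results

# All of these combinations happen in practice:
examples = [
    ResourceStatus(fetched=True),                       # fetched, nothing more
    ResourceStatus(fetched=True, cached=True),          # cached but not indexed
    ResourceStatus(fetched=True, indexed=True),         # indexed without caching
    ResourceStatus(fetched=True, rendered=True,
                   cached=True, indexed=True),          # the full set
]
for status in examples:
    print(status)
```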
On the commercial web, many images are stock or boilerplate visuals from suppliers. If Google already has the image you are supplying indexed at a higher resolution or at superior quality (factoring compression), and if your site is not a 'main contender' in terms of popularity and trust metrics, Google probably won't index that image on your site. Why would it? That would just mean that when users performed an image search, they would see large panes of results which were all the same image. Users only have so much screen real-estate (especially with the rise of mobile browsing). Seeing loads of the same picture at slightly different resolutions would just be annoying. People want to see a variety, a spread of things! **That being said** - your images are lush and I don't think they're stock rips!
If some images on your page, post or website are not indexed - it's not necessarily an 'issue' or 'error'.
Looking at the post you linked to: https://flothemes.com/best-lightroom-presets-photogs/
I can see that it sits on the "flothemes.com" domain. It has very strong link and trust metrics:
Ahrefs - Domain Rating: 83
Moz - Domain Authority: 62
Given that, you'd expect most of these images to be unique (I don't have time to do a reverse image search on all of them), especially since the content seems really well done. I'm pretty confident (though not certain) that quality and duplication are not to blame in this instance.
That makes me think: hmm, maybe some of the images don't meet Google's compression standards.
Check out these results (https://gtmetrix.com/reports/flothemes.com/xZARSfi5) for the page / post you referenced, on GTMetrix (I find it superior to Google's PageSpeed Insights), and click on the "Waterfall" tab.
You can see that some of the image files have pretty large 'bars' in terms of the total time it took to load in those individual resources. The main offenders are this image: https://l5vd03xwb5125jimp1nwab7r-wpengine.netdna-ssl.com/wp-content/uploads/2016/01/PhilChester-Portfolio-40.jpg (over 2 seconds to pull in by itself) and this one: https://l5vd03xwb5125jimp1nwab7r-wpengine.netdna-ssl.com/wp-content/uploads/2017/04/Portra-1601-Digital-2.png (around 1.7 seconds to pull in).
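If you want to reproduce that finding outside GTMetrix, a quick sketch like this will time each image fetch (using Python's requests library; the two URLs are the slow ones from the waterfall above):

```python
# Quick sketch: time how long each image takes to download,
# roughly reproducing the GTMetrix waterfall finding.
import time
import requests

IMAGE_URLS = [
    "https://l5vd03xwb5125jimp1nwab7r-wpengine.netdna-ssl.com/wp-content/uploads/2016/01/PhilChester-Portfolio-40.jpg",
    "https://l5vd03xwb5125jimp1nwab7r-wpengine.netdna-ssl.com/wp-content/uploads/2017/04/Portra-1601-Digital-2.png",
]

for url in IMAGE_URLS:
    start = time.perf_counter()
    response = requests.get(url, timeout=30)
    elapsed = time.perf_counter() - start
    size_kb = len(response.content) / 1024
    print(f"{elapsed:5.2f}s  {size_kb:8.1f} KB  {url}")
```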
Check out the resource URLs. They're being pulled into your page, but they're not hosted on your website. As such - how could Google index those images for your site when they're pulled in externally? Maybe there's some CDN stuff going on here. Maybe Google is indexing some images on the CDN because it's faster, and not from your base domain. This really needs looking into in a lot more detail, but I smell the tail of something interesting there.
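A quick way to audit that yourself is to pull every image src from the page and group them by hostname (a rough sketch - a proper HTML parser such as BeautifulSoup would be more robust than this regex):

```python
# Rough sketch: list every <img> src on the page, grouped by hostname,
# to see how many images are served from an external CDN host.
import re
from collections import Counter
from urllib.parse import urlparse

import requests

PAGE_URL = "https://flothemes.com/best-lightroom-presets-photogs/"

html = requests.get(PAGE_URL, timeout=30).text
srcs = re.findall(r'<img[^>]+src=["\']([^"\']+)["\']', html)

hosts = Counter(urlparse(src).netloc or "(relative URL)" for src in srcs)
for host, count in hosts.most_common():
    print(f"{count:4d}  {host}")
```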
If images are deemed to be uncompressed, or if their resolution is just way OTT (such that most users would never need even half of the full deployment resolution), Google won't index those images. Why? Well, they don't want Google Images to become a lag-fest, I guess!
**Your main issue is that you are not serving 'scaled' images** (or, apparently, optimising them). On that same GTMetrix report, check out the "PageSpeed" tab. Yeah, you scored an F by the way (that's a fail), and it's mainly down to your image deployment.
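As a starting point, here's a minimal sketch of pre-scaling and recompressing an image with the Pillow library before upload. The filenames and the 1,600px cap are assumptions - pick a width that matches the largest size your theme actually renders:

```python
# Minimal sketch: pre-scale and recompress an image with Pillow so the
# served file matches the largest size the page actually renders.
# The filenames and the 1600px cap are placeholder assumptions.
from PIL import Image

MAX_WIDTH = 1600  # assumed largest rendered width for this theme

def scale_and_compress(src_path: str, dst_path: str) -> None:
    with Image.open(src_path) as img:
        if img.width > MAX_WIDTH:
            new_height = round(img.height * MAX_WIDTH / img.width)
            img = img.resize((MAX_WIDTH, new_height), Image.LANCZOS)
        img = img.convert("RGB")  # JPEG can't store alpha; flatten PNGs
        # quality=82 is a common sweet spot between file size and fidelity
        img.save(dst_path, "JPEG", quality=82, optimize=True, progressive=True)

scale_and_compress("PhilChester-Portfolio-40.jpg", "PhilChester-Portfolio-40-scaled.jpg")
```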
Google thinks one or more of the following:
- You haven't put enough effort into optimising some of your images
- Some of your images are not worth indexing or it can find them somewhere else
- Google is indexing some of the images from your CDN instead of your base domain
- Google is having trouble indexing images for your domain, which are permanently or temporarily stored off-site (and the interference is causing Google to just give up)
I know there's a lot to think about here, but I hope I have at least put you on the 'trail' of a reasonable solution.
This was fun to examine, so thanks for the interesting question!