Google doesn't index image slideshow
-
Hi,
My articles are indexed, and so are the full-size images referenced via a meta tag in the body. But the images in the slideshow are not indexed — do you have any idea why? Could it be a problem with the JS?
Example : http://www.parismatch.com/People/Television/Sport-a-la-tele-les-femmes-a-l-abordage-962989
Thank you in advance
Julien
-
You can do a "site:" search directly in Google; here is what I currently see --> http://screencast.com/t/ZVqq5iumQ. You can run a site: search on the whole domain, on a subfolder, or on a specific page, etc.
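For example (illustrative queries — run the CDN one in Google Images; the CDN hostname is taken from the sitemap entry quoted further down this thread):

    site:parismatch.com                  all indexed pages on the domain
    site:parismatch.com/People/          just one subfolder
    site:cdn-parismatch.ladmedia.fr      in Google Images: which CDN-hosted image files are indexed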
-
OK, what method do you recommend for verifying image indexation directly in Google?
I will post a message explaining the change after updating the sitemaps.
Thanks for everything
-
Thanks! OK, yes I'd make your Sitemap and HTML image URLs the same.
Also, that's a LOT of images, so I'm not surprised Google is taking time to index them.
Also, there can sometimes be a delay in Search Console data. You can always check Google itself to see which files are indexed.
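For reference, a minimal image sitemap entry looks like this (placeholder URLs, not your actual files); the key point is that the image:loc should be byte-for-byte identical to the URL the page's img src serves:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
            xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
      <url>
        <loc>http://www.example.com/some-article</loc>
        <image:image>
          <!-- must match the exact URL used in the HTML, not an alternate CDN host -->
          <image:loc>http://cdn.example.com/images/photo.jpg</image:loc>
        </image:image>
      </url>
    </urlset>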
-
Not really, it seems to be OK.
-
Thanks! Hmmm did it clear Search Console without any errors? I see an error in my browser --> http://screencast.com/t/VLWhg8EyR3Dd
-
The images are here :
http://www.parismatch.com/var/exports/sitemaps/sitemap_images_parismatch-10.xml
-
Is this your current sitemap?
http://www.parismatch.com/var/exports/sitemaps/sitemap_parismatch-index.xml
What is the direct address of the image sitemap(s)?
Thanks!
-
Thanks Dan. Unfortunately, we have changed the image host; the images are now on a different CDN...
Before the redesign, we used exactly this configuration, visible on this page (it's just an article; we don't have a slideshow example):
http://www.parismatch.com/Chroniques/Art-de-vivre/Lodge-Story-925785
We may have a problem with the image sitemaps, because in the Google sitemaps we have:
<image:loc>http://cdn-parismatch.ladmedia.fr/var/news/storage/images/paris-match/culture/cinema/le-fils-de-saul-la-critique-763334/8067828-1-fre-FR/Le-Fils-de-Saul-la-critique.jpg</image:loc>
while the HTML source references the image under a different URL. Should the sitemaps perhaps use the same URLs as the HTML?
Many thanks for your help !
-
I see, thanks. Hmmm... did anything else change besides the redesign? Did the image URLs change, or did where they are hosted change?
The current implementation doesn't show any issues, but I wonder if everything was handled properly in the move to the new design. Did you always have a slideshow format? Did the code change, or just the design?
-
Thanks Dan!
I agree with you. It's problematic because since the website redesign we have recorded a drop in image traffic from Google.
-
Hi There
There do not appear to be any accessibility issues; I can crawl and access the images just fine with my crawler.
My guess is that since the images are duplicates and also exist on other websites, Google may be avoiding indexing them: they are already indexed elsewhere, and they are technically not being linked to with a normal <img> tag.
Is this causing a particular issue for the site, or is it just a pesky technical bug?
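To illustrate that last point (a sketch, not the site's actual code): a slideshow that only injects images via JS leaves no conventional link for a crawler to follow, whereas a static fallback provides one:

    <!-- hypothetical slideshow markup: the full-size URL exists only as data
         consumed by JS, so there is no ordinary tag for Googlebot-Image -->
    <div id="slideshow" data-slides='["http://cdn.example.com/photo-full.jpg"]'></div>
    <script>/* JS reads data-slides and injects <img> elements at runtime */</script>

    <!-- crawlable alternative: a plain <a>/<img> pair that works without JS -->
    <a href="http://cdn.example.com/photo-full.jpg">
      <img src="http://cdn.example.com/photo-thumb.jpg" alt="Photo description">
    </a>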
-
The display image is resized and indexed, but the full-size image is only referenced in a META tag and is not indexed.
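Presumably something along these lines (illustrative markup, not the actual source):

    <!-- resized image, present as a normal tag, hence indexed -->
    <img src="http://cdn.example.com/photo-640x420.jpg" alt="...">
    <!-- full-size image, referenced only in metadata, hence ignored -->
    <meta itemprop="image" content="http://cdn.example.com/photo-full.jpg">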
-
How are your images being fed into the site? Are you using a CDN?
-Andy
-
I checked the robots.txt file; it doesn't block the images. The website runs on eZ Publish.
-
Hi Julien,
I always start with robots.txt in these cases, but that looks OK.
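(For reference, this is the kind of rule I check for first — illustrative paths based on the storage path visible in the image URLs, not the site's actual file. A Disallow like this on either the main domain or the CDN host would keep crawlers away from the files entirely:)

    User-agent: *
    Disallow: /var/news/storage/images/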
Is anything being blocked by JS? Something else to look at: if you are using something like WordPress, there are plugins that can block access to these without you realising it.
Looking at the URL of the image, this appears to be hosted on a 3rd party site?
-Andy