Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.
Image Search
-
Hello Community,
I have been reading and researching about image search and trying to find patterns within the results, but unfortunately I could not reach a conclusion on two matters. Hopefully this community will have the answers I am searching for.
1) Watermarked images (To remove or not to remove the watermark from photos)
I see a lot of confusion on this subject and am pretty confused myself. While it may be true that watermarked photos do not incur a penalty, they certainly do not seem to help.
At least in my industry and on a bunch of different random queries I have made, watermarked images are hard to come by on Google's images results. Usually the first results do not have any watermarks.
I have read online that Google takes into account user behavior and most users prefer images with no watermark. But again, it is something "I have read online" so I don't have any proof. I would love to have further clarification and, if possible, a definite guide on how to improve my image results.
2) Multiple nested folders (Folder depth)
Due to speed concerns, our tech guys are using one image per folder and have created a convoluted folder structure where the photos are actually nine levels deep. Most of our competition and many small WordPress blogs outrank us on Google Images, and in ALL INSTANCES I have checked, their photos are 3, 4 or 5 levels deep - never inside 9 nested folders.
So... A) Should I consider removing the watermark - which is not that intrusive, but is visible?
B) Should I try to simplify the folder structure for my photos?
Thank you
-
Thank you very much. This is helpful.
Sincerely,
Koki
-
Hi Mike
On the watermark question, I would personally remove the watermark. Whether or not watermarks impact your ranking directly, anything that puts people off clicking and interacting with your images is a negative.
I would also do it from a quality point of view, and I would draw your attention to Google's guidelines on image publishing:
"Similarly, some people add copyright text, watermarks, or other information to their images. This kind of information won't impact your image's performance in search results, and does help photographers claim credit for their work and deter unknown usage. However, if a feature such as watermarking reduces the user-perceived quality of your image or your image's thumbnail, users may click it less often in search results."
I imagine you have already had a look at this, and I would recommend you go with your own findings.
Here are Google's guidelines to Image Publishing - https://support.google.com/webmasters/answer/114016
I would also remove the watermark if you want people to use your images: you can then do a reverse image search to find sites that use them and request attribution if you haven't already been given it - a great way to get exposure.
I would also try to simplify your folder structure, as 9 levels deep is very deep and likely to make Google's crawl of your images less efficient. I don't understand the reasoning behind an individual image per folder - something like images segmented by subject, or even by month as WordPress does by default, would make more sense.
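If you do flatten the structure, make sure to 301 redirect the old deep image URLs to the new flat ones so any existing image rankings carry over. A minimal sketch, assuming an Apache server and made-up folder names (adjust the pattern to your real paths):

```apache
# Hypothetical rule: collapse any depth of intermediate folders under /img/
# down to a single flat /images/ folder - this assumes every image filename
# is unique site-wide. Requires mod_alias (RedirectMatch).
RedirectMatch 301 ^/img/(?:[^/]+/)+([^/]+\.(?:jpe?g|png|gif|webp))$ /images/$1
```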
Do you have an image sitemap in place? If not here is some more info from Google - https://support.google.com/webmasters/answer/178636?hl=en
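For reference, an image sitemap is just a regular XML sitemap with an extra image namespace - a minimal sketch (the example.com URLs are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <!-- One <url> entry per page, listing the images that appear on it -->
  <url>
    <loc>https://www.example.com/some-page.html</loc>
    <image:image>
      <image:loc>https://www.example.com/images/photo.jpg</image:loc>
    </image:image>
  </url>
</urlset>
```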
Hope this helps
Matt
Related Questions
-
Google image search filter tabs and how to rank on them
I have noticed Google image search has included suggestion tabs (e.g., design, nature... when searching "background") at the top of the image search. Are there specific meta tags I can add to my images so that they will show up on each tab? Do those filters just show content based on image keywords or something else?
Technical SEO | Mike555
-
Why images are not getting indexed and showing in Google webmaster
Hi, I would like to ask why our website's images are not being indexed by Google. I have shared the following screenshot from Search Console: https://www.screencast.com/t/yKoCBT6Q8Upw Last week (Friday 14 Sept 2018) it was showing 23.5K out of 31K images submitted and indexed by Google. But now it is showing only 1K 😞 Can you please let me know why this might happen and why the images are not getting indexed and showing in Google Webmaster.
Technical SEO | 21centuryweb0 -
Parked domain is first in search results
We have several brand-related domains which are parked and pointing to our main website. Some of these are redirecting using a 302 (don't ask, that's a whole other story), but these are being changed. It shouldn't matter what type of redirect they are, though, no? Since there has never been any traffic and they are not indexed? But it seems that one of them was indexed: exotravel.vn. A search for our brand name or the previous brand name (exotravel and exotissimo) brings up this parked domain first! How can that be? The domain has never been used and has no backlinks. exotravel.vn is redirecting, and I submitted a change of address to Google weeks ago, but it's still coming up first in all brand name searches for exotissimo or exotravel.
Technical SEO | Exotissimo0 -
Difference between search volume in KWT and Impressions in GWT
Hi there, Sorry I've been a bit quiet of late; we're going through a huge rebranding exercise as well as staying on top of client work. Anyway, I've got an issue with the keyword research phase of a client remarketing. Trying to decide which keywords to target (aren't we all?). The client has 3 months of back data in Google Webmaster Tools, which helps us to see impressions, CTR and actual click-throughs etc. Now, they rank #1 on Google.com for a certain keyword (logged out, of course). According to the Google Keyword Tool (logged in) there are 2.7 million searches per month for this keyword. With the average CTR being 18% for a #1 keyword, that should be bringing in roughly 400k visits. However, take the same keyword in Google Webmaster Tools and the impressions are actually around 1,600 per month with a CTR of 9%. Different CTRs for different sectors I can accept. What I don't get is the vast difference between the impressions in GWT and the alleged search volume coming from the Keyword Tool. I really need to understand this so we can better select keywords and judge the approximate traffic expected when ranking #1 for a keyword. Any help would be really useful. Thank you!
Technical SEO | Nobody15609869897230 -
Internal search : rel=canonical vs noindex vs robots.txt
Hi everyone, I have a website with a lot of internal search results pages indexed. I'm not asking if they should be indexed or not - I know they should not, according to Google's guidelines. And they make a bunch of duplicated pages, so I want to solve this problem. The thing is, if I noindex them, the site is gonna lose a non-negligible chunk of traffic: nearly 13% according to Google Analytics! I thought of blocking them in robots.txt. This solution would not keep them out of the index, but the pages appearing in Google SERPs would then look empty (no title, no description), so their CTR would plummet and I would lose a bit of traffic too... The last idea I had was to use a rel=canonical tag pointing to the original search page (which is empty, without results), but it would probably have the same effect as noindexing them, wouldn't it? (Never tried, so I'm not sure.) Of course I did some research on the subject, but each of my findings recommended only one of the 3 methods! One even recommended noindex + robots.txt block, which is pointless because the noindex would then never be seen... Is there somebody who can tell me which option is the best to keep this traffic? Thanks a million
Technical SEO | JohannCR0 -
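For anyone weighing the same three options, here is a minimal sketch of what each looks like in practice (all URLs and paths are placeholders):

```
# Option 1 - robots meta tag in each result page's <head>:
#   removes the pages from the index but still lets crawlers fetch them.
<meta name="robots" content="noindex, follow">

# Option 2 - robots.txt: blocks crawling, but already-indexed URLs
#   can remain in the index as bare, snippet-less results.
User-agent: *
Disallow: /search/

# Option 3 - canonical tag in each result page's <head>:
#   suggests consolidating signals to the base search page.
<link rel="canonical" href="https://www.example.com/search/">
```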
How should I structure a site with multiple addresses to optimize for local search??
Here's the setup: we have a website, www.laptopmd.com, and we're ranking quite well in our geographic target area. The site is chock-full of local keywords, has the address properly marked up, is HTML5 and schema.org compliant, near the top of the page, etc. It's all working quite well, but we're looking to expand to two more locations, and we're terrified that adding more addresses and playing with our current set-up will wreak havoc with our local search results, which, quite frankly, we currently rock. My questions: 1) When it comes time to do sub-pages for the new locations, should we strip the location information from the main site and put up local pages for each location in subfolders? 1a) Should we use subdomains instead of subfolders to keep Google from becoming confused? 2) Should we consider simply starting identically branded pages for the individual locations and hope that exact-match location-based URLs will make up for the duplicate-content hit and overcome the difficulty of building a brand from multiple pages? I've tried to look for examples of businesses that have tried to do what we're doing, but all the advice has been about organic search, which I already have the answer to. I haven't been able to find a good example of a small business with multiple locations AND good rankings for each location. Should this serve as a warning to me?
Technical SEO | LMDNYC0 -
Loss of search engine positions after 301 redirect - what went wrong?!?
Hi Guys, After adhering to the on-page optimisation suggestions given by SEOmoz, we redirected some of our old URLs to new ones. We set 301 redirects from the old pages to the new ones on a page-by-page basis, but our search engine rankings subsequently fell off the radar and we lost PR. We confirmed the redirection with Fiddler and it shows a 301 permanent redirect on every page, as expected. To manage redirection using common code logic we executed the following: In an HTTP module, using "rewrite path", we route all old page requests to a page called "redirect.aspx?oldpagename=[oldpagename]". This happens server-side. In redirect.aspx we redirect from the old page to the new page using a 301 permanent redirect. In the browser, when an old page is requested, it will 301 redirect to the new page. In the hope that we and others can learn from our mistakes - what did we do wrong?! Thanks in advance. Dave - www.paysubsonline.com
Technical SEO | Evo0 -
Should I set up a disallow in the robots.txt for catalog search results?
When the crawl diagnostics came back for my site, they showed around 3,000 pages of duplicate content. Almost all of them are catalog search results pages. I also did a site search on Google, and most of the results pages are in their index too. I think I should just disallow the bots in the /catalogsearch/ subfolder, but I'm not sure if this will have any negative effect?
Technical SEO | JordanJudson0
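For reference, the disallow rule described above would look like this in robots.txt (assuming the search result pages all live under /catalogsearch/). Bear in mind that robots.txt only blocks crawling - URLs that are already indexed can remain in the index until they drop out or are marked noindex:

```
User-agent: *
Disallow: /catalogsearch/
```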