Questions about duplicate photo content?
-
I know that Google is a mystery, so I'm not sure these questions have answers, but I'm going to ask anyway! I recently realized that Google is not happy with duplicate photo content. I'm a photographer and have sold many photos in the past (while retaining the rights to them) that I am now using on my site. My recent revelation means that I'm now taking down all of these photos. I've also been reverse image searching all of my photos to see if I let anyone else use them first, and in the course of this I found that many of my photos are being used by other sites on the web. So my questions are:
With photos that I used first and others have stolen: if I edit these photos (to add copyright info) and then re-upload them, will the sites that are using them then get credit for using the original image first?
If I have a photo on another one of my own sites and I take it down, can I safely use that photo on my main site, or will Google retain the knowledge that it's been used somewhere else first?
If I sold a photo and it's being used on another site, can I safely use a different photo from the same series that is almost exactly the same? I am unclear what data from the photo Google is matching, and whether it can tell the difference between photos taken a few seconds apart.
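On the last point: no one outside Google knows exactly what data it matches, but near-duplicate image detection is commonly done with perceptual hashes, which compare coarse pixel patterns rather than exact bytes. Here is a toy Python sketch of one such scheme (average hashing) on made-up grayscale grids, purely to illustrate the idea — it is not Google's actual algorithm:

```python
# Toy illustration of perceptual ("average") hashing, one family of
# techniques an image search engine *could* use to match near-duplicate
# photos. Works on plain 2-D lists of grayscale values to stay
# dependency-free; real systems operate on resized, filtered images.

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is above the mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests a near-duplicate."""
    return sum(a != b for a, b in zip(h1, h2))

# Two "photos" taken seconds apart: almost identical pixel values.
shot_a = [[10, 200, 30], [40, 220, 60], [70, 210, 90]]
shot_b = [[12, 198, 31], [41, 221, 58], [69, 212, 92]]
# A completely different "photo".
other = [[200, 10, 200], [10, 200, 10], [200, 10, 200]]

print(hamming_distance(average_hash(shot_a), average_hash(shot_b)))  # → 0
print(hamming_distance(average_hash(shot_a), average_hash(other)))   # → 6
```

The takeaway: two frames shot seconds apart can hash identically under a scheme like this, so if Google uses anything similar it could treat them as the same image — but whether it does, and whether that matters for ranking, is exactly the open question in this thread.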
-
I do now see that he also claimed, vice versa, that using originals won't help either.
Why would Matt or Rand mention it? They don't work for Google, so they won't know everything.
But he also said "to the best of my knowledge" at that time, and that it may change in the future, and that was two years ago. So I would still try to use original images if you can.
-
I don't know where you got the idea that Google is not happy with duplicate photo content, because there is no such thing. If there were, Matt or Rand would have already mentioned it. But there isn't, and at the very least I've already provided evidence to support my claim. Would you rather rely on other presumptions?
FYI, Google even has its own stance on duplicate content. You see, content and photos are meant to be shared, so as long as you're not doing spammy things on the web, Google has no reason to punish you. Cheer up!
-
But that video is two years old, and in it he says that duplicate photo content could become a quality signal in the future. So I'm not sure it's still relevant. But I guess we can't prove it one way or the other!
-
But they may reward you for unique images.
-
I think you might want to read this: https://www.plagiarismtoday.com/2013/07/29/google-we-dont-penalize-duplicate-images/.
-
Why do you think Google doesn't penalize for duplicate photo content? It seems like it would be a great way to find scrapers and those with low-value content.
-
Hi Lina,
I understand your trouble, Lina, but worry not: Google doesn't penalize duplicate photo content. You can, however, optimize your photos so they're easy to find by adding a short but concise title tag and meta description.
You don't need to edit the photos, but you're free to add copyright info. However, I don't think it's necessary for the photos you sold.
It's also not necessary to take down existing photos from your other sites. But if it were me, I would add a referral link on those sites (noting that visitors can find more photos, or the original versions, on your main site). Not only will your audience take notice of your main site, the link may also improve its ranking in search results.
True, Google can tell the difference between photos taken a few seconds apart, but there's no reason it should be a big issue. Photos were meant to be shared from the start.
It also sounds like you want to build an audience for your main site quickly. For that, you might try Google AdWords.
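As a rough sketch of what "short but concise" can mean in practice, here is a small Python checker against commonly cited snippet-length rules of thumb (roughly 60 characters for a title and 155 for a description — community guidelines, not official Google limits; the sample title and description below are made up):

```python
# Checks a page title and meta description against commonly cited
# length guidelines (~60 and ~155 characters). These limits are rules
# of thumb used by SEO practitioners, not official Google numbers.

def check_meta(title, description, title_max=60, desc_max=155):
    """Return a list of length issues; an empty list means both fit."""
    issues = []
    if len(title) > title_max:
        issues.append(f"title is {len(title)} chars (aim for <= {title_max})")
    if len(description) > desc_max:
        issues.append(f"description is {len(description)} chars (aim for <= {desc_max})")
    return issues

# Hypothetical example for a photography portfolio page:
print(check_meta("Lina's Landscape Photography | Portfolio",
                 "Original landscape photos by Lina, available for licensing."))
# → [] (both within the guideline lengths)
```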
Hope this helps.