Copying Content With Permission
-
Hi, we received an email from someone who wants to copy and paste our content onto his website. He says he will keep all the links we put there and give us full credit for it. So besides keeping all the links on the page, what is the best way for him to give us credit? A link to the original article? A special meta tag? Something else?
Thank you
P.S. Our site is much more authoritative than his, and we get indexed within 10 minutes of publishing a page, so I'm not worried about him outranking us with our own content.
-
Very controversial...duplicate content...
-
Syndication Source and Original Source are both generally used for the Google News algorithm at this point. For the main SERPs you would use a cross-domain rel="canonical". The problem with all of these is that they require the re-publisher to edit their HTML head on a per-article basis. That is not technologically scalable for many sites, so it could kill the deal. If they are willing to give you the rel canonical tag pointing to your domain, that is best (especially if the story includes links to your site). Otherwise, getting your site indexed first and making sure their links to your site in the copy are followable should do the trick.
Don't let them publish every single story you write though. You want readers to have a reason to come subscribe to your site if they read something on the other site.
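For what it's worth, a cross-domain rel="canonical" like the one described above is a single line in the head of the re-publisher's copy of the article. The URL here is a placeholder for the original article's address:

```html
<!-- In the <head> of the re-publisher's copy of the article -->
<!-- example.com stands in for the original publisher's domain -->
<link rel="canonical" href="http://www.example.com/original-article.html">
```

This is also why it doesn't scale for many re-publishers: the href has to be different on every article page, which usually means template or CMS work on their end.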
-
Thanks Matt, that's great stuff! I always keep track of what gets indexed. And yes, choosing who to share the content with is certainly very important; I would not want a content farm associated with our site in any way, especially now.
-
Hi Andres,
As long as you're getting direct followed links back to your original article, then that should be enough. A couple of other things though:
- Even though you're confident you'll be indexed before the other site, I'd still implement some embargo time on when they can publish on their site as a fallback.
- Take a look at the site itself that will be linking to you... is it something you a) want your content associated with, and b) want your link profile associated with?
Some resources you may be interested in:
[1] http://www.seomoz.org/blog/whiteboard-friday-content-technology-licensing
[2] http://googlewebmastercentral.blogspot.com/2006/12/deftly-dealing-with-duplicate-content.html (deals with syndication)
[3] http://www.mattcutts.com/blog/duplicate-content-question/
-
If this happens often, you should consider using http://www.tynt.com/ and modifying your attribution settings to suit your needs.
-
I have not tested the "syndication-source" or "original-source" tags personally but I have seen a very good case of credit syndication being used at http://www.privatecloud.com
Almost 95% of the content on this website is a word-for-word duplicate of original articles located on third-party websites. I have been tracking this site for almost 6 months now and have seen several instances of duplicate pages (with credit to the original article) indexed and ranking in Google SERPs.
Using this example I would agree that your technique should work fine.
-
Hi Sameer, I am not sure about using a canonical tag, since it's not our site and there may be more content there than just ours. He asked permission just to copy and paste, so yes, it's duplicate content, and we want it indexed for the backlinks. This is my idea:
http://googlenewsblog.blogspot.com/2010/11/credit-where-credit-is-due.html
syndication-source indicates the preferred URL for a syndicated article. If two versions of an article are exactly the same, or only very slightly modified, we're asking publishers to use syndication-source to point us to the one they would like Google News to use. For example, if Publisher X syndicates stories to Publisher Y, both should put the following metatag on those articles:
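The tag itself, as described in that Google News post, is a single meta element in the head of both copies of the article. The URL below is a placeholder for whichever version you want Google News to treat as the source:

```html
<!-- On both Publisher X's and Publisher Y's copies of the article -->
<meta name="syndication-source" content="http://www.example.com/original-article.html">
```

Note that, per Matt's answer above, this tag only influences Google News, not the main web SERPs.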
let me know what you think.
-
Hey Andrés,
As a general rule, content is considered duplicate only if it is more than a 35-40% copy of the original. If the person wants to copy your website word for word, here are a few ways you can avoid a duplicate content penalty:
1. Rel canonical - Add a rel canonical tag to the head section of the non-canonical page. This will tell Google which page is the preferred one to index (your web pages in this case).
2. Reduce duplication - Ask the person to modify the content and rewrite it in their own words. DupeCop is a good tool that will let you compare two pieces of content and measure the duplication percentage. (Don't use respun content; always rewrite in your own words.)
3. NoIndex meta robots tag - If they are not willing to change the page content, you can ask them to keep those pages out of the index by adding a noindex meta tag.
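For the rel canonical and noindex options above, the markup would look something like this in the head of the re-publisher's page (the URL is a placeholder for the original article):

```html
<!-- Rel canonical: point search engines at the original article -->
<link rel="canonical" href="http://www.example.com/original-article.html">

<!-- NoIndex alternative: keep the copy out of the index entirely.
     "follow" keeps the links in the copy crawlable, which matters
     if the backlinks are the point of the deal. -->
<meta name="robots" content="noindex, follow">
```

The two are alternatives, not complements: with the canonical in place, the copy consolidates to the original; with noindex, the copy simply never appears in results.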
Best
Sameer
-
So the best way to get credit for the article is just the links? Is there any special tag, something like meta name="syndication-source", or is there no need?
And yes, you are right, it's manual syndication, and he will keep all the links.
Thank you, Gianluca
-
Hi...
What you describe is essentially a form of syndication of your content. A manual one, but still syndication.
I believe that when the guy says he will give you full credit for the content, he means an optimized full link to the original article.
If so, I would say yes to him. If not, ask him to do it.