As a wholesale website, can our independent retailers' websites use (copy) our content?
-
As a wholesaler of villa rentals, we have descriptions, images, prices, etc. Can our agents (independent retailers) use the content from our website on their sites, or will this penalize us or them in Google rankings?
-
Thanks, this is what I would say under normal circumstances, but these websites require a feed that tells them exactly when a villa is booked, so that you don't get double bookings. Don't you think Google may view this slightly differently?
-
Thanks, Adam, for your reply.
Just to give you a bit more info: we have set up an external XML feed with completely different copy from our website, but all the agents' websites are having problems with this. It's also worth noting that the copy in the XML feed is identical and is given to around 15 agents' websites in the same form. However, these websites are not currently being penalized, and they often rank higher than us. I believe Google views these websites slightly differently; an example would be http://www.homeaway.com. If the only websites to be affected are the agents/retailers, then this should already be the case, as they all have the same copy on their sites. So if it's not going to harm our website, I would be inclined to give them the feed straight from our website, which would make our lives easier.
-
It is never a good idea for anyone to copy another site's content. Regardless of the connection between the companies, I would always advise creating unique content for both sites. As you are the original creator of the content, you shouldn't face any penalty, but the independent retailers could face duplicate content issues if they copy the content from your site.
I would certainly advise against the independent retailer copying your content. However, it would probably be more beneficial and suitable to have the retailer link to your site instead.
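If the retailers genuinely must reuse the descriptions verbatim (for example, because they are driven by your availability feed), one option worth researching is a cross-domain rel=canonical tag on each copied page, pointing back to your original listing, which asks Google to credit the wholesaler's URL. A minimal sketch; the domain and path below are hypothetical placeholders, not real client URLs:

```html
<!-- Placed in the <head> of the retailer's copied villa page; the href
     points at the wholesaler's original listing (hypothetical URL). -->
<link rel="canonical" href="http://www.wholesaler-example.com/villas/casa-azul/" />
```

Note that Google treats a cross-domain canonical as a strong hint rather than a directive, so it is a complement to, not a substitute for, unique content.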
Hope this helps.
-
As long as Google credits your site as the original author of the content (i.e. Google crawls your site before it crawls the site that copied it), you should be safe. So yes, it will penalize them, not you.
Related Questions
-
Can I Block https URLs using Host directive in robots.txt?
Hello Moz Community, I have recently found that Googlebot has started crawling the HTTPS URLs of my website, which is increasing the number of duplicate pages. Instead of creating a separate robots.txt file for the HTTPS version of my website, can I use the Host directive in robots.txt to suggest to Googlebot which is the original version of the website? Host: http://www.example.com I was wondering if this method will work and tell Googlebot that the HTTPS URLs are a mirror of this website. Thanks for all of the great responses! Regards,
Technical SEO | TJC.co.uk | Ramendra
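For reference, a Host directive in robots.txt looks like the sketch below, but as far as I know only Yandex ever honored Host; Google ignores it, so it will not consolidate HTTP/HTTPS duplicates for Googlebot. A rel=canonical tag or a 301 redirect is the usual Google-supported alternative. The domain is the question's own placeholder:

```
# robots.txt - Host is a Yandex-specific directive; Google ignores it.
User-agent: *
Disallow:

# Preferred mirror (honored by Yandex only):
Host: www.example.com
```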
Can't get Google to Index .pdf in wp-content folder
We created an in-depth case study/survey for a legal client and can't get Google to crawl the PDF, which is hosted on WordPress in the wp-content folder. It is linked to heavily from nearly all pages of the site by a global sidebar. Am I missing something obvious as to why Google won't crawl this PDF? We can't get much value from it unless it gets indexed. Any help is greatly appreciated. Thanks! Here is the PDF itself: http://www.billbonebikelaw.com/wp-content/uploads/2013/11/Whitepaper-Drivers-vs-cyclists-Floridas-Struggle-to-share-the-road.pdf Here is the page it is linked from: http://www.billbonebikelaw.com/resources/drivers-vs-cyclists-study/
Technical SEO | inboundauthority
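Two common culprits when a heavily linked PDF never gets indexed are a robots.txt rule blocking /wp-content/ and a noindex sent via the X-Robots-Tag HTTP header — headers are the only way a PDF can carry a noindex, since it has no meta tags. A small sketch for checking a header value you might retrieve with `curl -I`; the function name is my own, not from any library:

```python
def is_noindexed(x_robots_tag):
    """Return True if an X-Robots-Tag header value would block indexing.

    PDFs cannot carry a <meta name="robots"> tag, so a 'noindex' in this
    HTTP header (or a robots.txt Disallow covering the path) is a common
    reason a PDF never appears in search results despite heavy linking.
    """
    if not x_robots_tag:
        return False
    directives = {d.strip().lower() for d in x_robots_tag.split(",")}
    return "noindex" in directives or "none" in directives
```

Run `curl -I` against the PDF URL and pass the X-Robots-Tag value (if any) into the helper; also confirm robots.txt has no `Disallow: /wp-content/` line.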
Can Googlebot read the content on our homepage?
Just for fun I ran our homepage through this tool: http://www.webmaster-toolkit.com/search-engine-simulator.shtml This spider seems to detect little to no content on our homepage. Interior pages seem to be just fine. I think this tool is pretty old. Does anyone here have a take on whether or not it is reliable? Should I just ignore the fact that it can't seem to spider our home page? Thanks!
Technical SEO | danatanseo
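One way to sanity-check a simulator like that is to extract the text a non-JavaScript crawler would actually see: if the home page's copy is injected by scripts (or lives in Flash or images), a dumb spider reports an empty page even though interior pages look fine. A rough sketch using only Python's standard library; feed it the raw HTML of the page:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect the text a non-JavaScript crawler would see,
    ignoring the contents of script and style blocks."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0  # depth inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def visible_text(html):
    """Return the crawlable text content of an HTML document."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)
```

If `visible_text` on your home page's source comes back nearly empty, the old simulator is probably right and the copy is not in the served HTML.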
Robots.txt: URLs are being crawled that we don't want crawled
Hello, we run a number of websites, and underneath them we have testing websites (sub-domains); on those sites we have a robots.txt disallowing everything. When I logged into Moz this morning, I could see the Moz spider had crawled our test sites even though we have said not to. Does anyone have any ideas how we can stop this happening?
Technical SEO | ShearingsGroup
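A blanket disallow should stop compliant crawlers, so the first things to verify are that each test sub-domain serves its own robots.txt (robots.txt is per-host, not inherited from the parent domain) and that the wildcard rule really covers Moz's crawlers. A sketch of what the file on a test sub-domain might look like; the hostname is a placeholder, and since robots.txt is purely advisory, HTTP authentication on the test sites is the only truly reliable block:

```
# robots.txt served at http://test.example.com/robots.txt
# It must exist on the sub-domain itself - the main site's file is not inherited.
User-agent: *
Disallow: /

# Moz's crawlers, addressed explicitly for good measure:
User-agent: rogerbot
Disallow: /

User-agent: dotbot
Disallow: /
```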
HTTPS pages still in the SERPs
Hi all, my problem is the following: our CMS (self-developed) produces HTTPS versions of our "normal" web pages, which means duplicate content. Our IT department put noindex,nofollow on the HTTPS pages about 6 weeks ago. I check the number of indexed pages once a week and still see a lot of these HTTPS pages in the Google index. I know that I may hit different data centers and that these numbers aren't 100% valid, but still... sometimes the number of indexed HTTPS pages even moves up. Any ideas/suggestions? Wait for a longer time? Or take the time and go to Webmaster Tools to kick them out of the index? Another question: for a nice query, one HTTPS page ranks No. 1. If I kick the page out of the index, do you think the HTTP page will replace it at No. 1, or will the ranking be lost? (It sends some nice traffic.) Thanks in advance 😉
Technical SEO | accessKellyOCG
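A noindex tag only takes effect as each page is recrawled, which can take months for deep pages. Since the HTTPS URLs are pure duplicates, a site-wide 301 from HTTPS to HTTP would both clear them from the index over time and pass the No. 1 ranking (and its traffic) to the HTTP twin, whereas a Webmaster Tools removal simply drops the URL. If the site happens to run on Apache, a hedged sketch (a self-developed CMS may need its own equivalent; note that nowadays you would normally consolidate in the other direction, onto HTTPS):

```apacheconf
# .htaccess sketch: 301 every HTTPS request to its HTTP twin.
RewriteEngine On
RewriteCond %{HTTPS} on
RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1 [R=301,L]
```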
Webmaster Tools lists a large number (hundreds) of different domains linking to my website, but only a few are reported in SEOmoz. Please explain what's going on?
Google's Webmaster Tools lists hundreds of links to my site, but SEOmoz only reports a few of them. I don't understand why that would be. Can anybody explain it to me? Is there someplace I can go to alert SEOmoz to this issue?
Technical SEO | dnfealkoff
Domain Transfer Process / Bulk 301s Using IIS
Hi guys - I am getting ready to do a complete domain transfer from one domain to another completely different domain for a client due to a branding/name change. Two things: first, I wanted to lay out a summary of my process and see if everyone agrees that it's a good approach, and second, my client is using IIS, so I wanted to see if anyone out there knows a bulk tool that can be used to implement 301s on the hundreds of pages that the site contains. I have found the process to redirect each individual page, but over hundreds it's a daunting task to look at. The nice thing about the domain transfer is that it is going to be a literal 1:1 transfer, with the only things changing being the logo and the name mentions. Everything else is going to stay exactly the same, for the most part. I will use dummy domain names in the explanation to keep things easy to follow: www.old-domain.com and www.new-domain.com. The client's existing home page has a 5/10 GPR, so of course, transferring mojo is very important. The process:
1. Clean up existing site 404s, duplicate tags and titles, etc. (a good time to clean house).
2. Create an identical domain structure tree, changing all URLs (for instance) from www.old-domain.com/freestuff to www.new-domain.com/freestuff.
3. Push several pages to a dev environment to test (dev.new-domain.com). Also, replace all instances of the old brand name (images and text) with the new brand name.
4. Set up 301 redirects (here is where my IIS question comes in below). Each page will be set up to redirect to its new permanent destination with a 301. Test a few.
5. Choose the lowest-traffic time of the week (from analytics data) to make the transfer all at once, including pushing the new content live to the server for www.new-domain.com and implementing the 301s. As opposed to moving over parts of the site in chunks, moving the site over in one swoop avoids potential duplicate content issues, since the content on the new domain is essentially exactly the same as on the old domain. Of course, all of the steps so far would apply to the existing sub-domains as well, e.g. video.new-domain.com.
6. Check for errors and problems with resolution issues. Check again. Check again.
7. Write to as many link partners as possible, inform them of the new domain, and ask for existing links to be switched, and future links updated, to the new domain. Even though 301s will redirect link juice, an actual link to the new domain page without the redirect is preferred.
8. Track rank of targeted keywords, overall domain importance, and GPR over time to ensure that you re-establish your mojo quickly.
That's it! OK, so everyone, please give me your feedback on that process! Secondly, as you can see in the middle of that process, the "implement 301s" step seems easier said than done, especially when you are redirecting each page individually (that would take days). So, the question here is: does anyone know of a way to implement bulk 301s for each individual page using IIS? From what I understand, in an Apache environment .htaccess can be used, but I really have not been able to find any info regarding how to do this in bulk using IIS. Any help here would be greatly appreciated!
Technical SEO | Bandicoot
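On IIS 7+ there is no need for per-page redirects at all: the URL Rewrite module can 301 every path on the old domain to the identical path on the new one with a single web.config rule. A sketch using the dummy domains from the question (it assumes the URL Rewrite module is installed on the server):

```xml
<!-- web.config on www.old-domain.com: one rule 301s every URL
     to the identical path on www.new-domain.com. -->
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Bulk 301 to new domain" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^(www\.)?old-domain\.com$" />
          </conditions>
          <action type="Redirect" url="http://www.new-domain.com/{R:1}"
                  redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```

Because the transfer is a literal 1:1 mapping, the `{R:1}` back-reference preserves each page's path, so hundreds of pages are covered by this one rule.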
Do any short URLs pass link juice? Google's own? Twitter's?
I've read a few posts saying not to shorten links at all, but we have a lot to tweet and need to. Is Google's shortener the best option? I've considered linking to the category index page the article is on and expecting the user to find and click on the article, but I don't like the experience that creates. I've considered making the article permalink tiny, but I would lose the page title being in the URL. Is this the best option?
Technical SEO | Aviawest
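Whether a shortener passes link juice comes down to the kind of redirect it issues: 301 (permanent) redirects pass link equity, and the mainstream shorteners (goo.gl, bit.ly) have historically used 301s. You can verify any particular shortener yourself with a HEAD request that doesn't follow the redirect; a sketch, with helper names of my own invention:

```python
import http.client
from urllib.parse import urlparse

def redirect_info(short_url):
    """HEAD-request a short URL without following redirects; return the
    HTTP status code and the Location header it points at."""
    parts = urlparse(short_url)
    conn_cls = (http.client.HTTPSConnection if parts.scheme == "https"
                else http.client.HTTPConnection)
    conn = conn_cls(parts.netloc, timeout=10)
    try:
        conn.request("HEAD", parts.path or "/")
        resp = conn.getresponse()
        return resp.status, resp.getheader("Location")
    finally:
        conn.close()

def passes_link_equity(status_code):
    """A 301 (permanent) redirect is the type that passes link equity;
    a 302 (temporary) does not endorse the target the same way."""
    return status_code == 301
```

For example, `redirect_info("https://bit.ly/some-code")` returns the status and destination; if the status is 301, the short link should pass equity to your article rather than strand it.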