PDF on financial site that duplicates ~50% of site content
-
I have a financial advisor client who has a downloadable PDF on his site that contains about 9 pages of good info. The problem is that much of the content can also be found on individual pages of his site.
Is it best to noindex/follow the PDF? It would be great to let the few pages of original content be crawlable, but I'm concerned about the duplicate content aspect.
Thanks --
-
This is what we have done with PDFs: assign rel="canonical" in the HTTP header via .htaccess.
We did this with a few hundred files and it took Google a LONG time to find and credit them.
-
You could set an X-Robots-Tag: noindex HTTP header on the PDF rather than rel=canonical.
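If you go that route, a rough sketch of what the rule might look like in .htaccess (assuming Apache with mod_headers enabled; the pattern below covers all PDFs, so narrow it to a single file if needed):

<FilesMatch "\.pdf$">
Header set X-Robots-Tag "noindex, follow"
</FilesMatch>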
-
Personally I think it would be better not to index it, but if necessary, the index folder root seems like a good option.
-
Thanks. Anybody want to weigh in on where to rel=canonical to? Home page?
-
If you are using Apache, you should put it in your .htaccess in this form:
<FilesMatch "my-file.pdf">
Header set Link '<http://misite/my-file.html>; rel="canonical"'
</FilesMatch>
-
I think the right way here is to put the rel=canonical in the PDF's HTTP header: http://googlewebmastercentral.blogspot.com/2011/06/supporting-relcanonical-http-headers.html
-
I thought the idea was to put rel=canonical on the duplicated page, to signal that "hey, this page may look like duplicate content, but please refer to this canonical URL"?
Looks like there is a PDF option for rel=canonical; I guess the question is, what page on the site to make canonical?
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=139394
Indicate the canonical version of a URL by responding with the Link rel="canonical" HTTP header. Adding rel="canonical" to the head section of a page is useful for HTML content, but it can't be used for PDFs and other file types indexed by Google Web Search. In these cases you can indicate a canonical URL by responding with the Link rel="canonical" HTTP header, like this (note that to use this option, you'll need to be able to configure your server):
Link: <http://www.example.com/downloads/white-paper.pdf>; rel="canonical"
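For this site's case, a minimal .htaccess sketch along those lines might look like this (assuming Apache with mod_headers; the PDF name and the target page below are just placeholders for whichever page the PDF overlaps most):

<FilesMatch "financial-guide\.pdf">
Header set Link '<https://www.example.com/financial-planning.html>; rel="canonical"'
</FilesMatch>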
-
Hi Keith,
I'm sorry, I should have clarified. The rel=canonical tags would be on your Web pages, not the PDF (they are irrelevant in a PDF document). Then Google will attribute your Web page as the original source of the content and will understand that the PDF just contains bits of content from those pages. In this instance I would include a rel=canonical tag on every page of your site, just to cover your bases. Hope that helps!
Dana
-
Not sure which page I would mark as canonical, since the PDF contains content from several different pages on the site. I don't think it's possible to assign different rel=canonical tags to separate portions of a PDF, is it?
-
As long as you have rel=canonical tags properly in place, you don't need to worry about the PDF causing duplicate content problems. That way, any original content should be picked up and any duplicate can be attributed to your existing Web pages. Hope that's helpful!
Dana
Related Questions
-
Duplicate ecommerce sites, SEO implications & others?
We have an established eCom site built out with custom PHP, dedicated SERPs, traffic, etc. The question has arisen of how to extend commerce to social, and we have found a solution with Shopify. In order to take advantage of this, we'd need to build out a completely new site in Shopify and would have to have the site live in order to have storefronts on Pinterest and Twitter. Aside from the obvious problem of having two databases, merchant processing, etc., does anyone know whether there are SEO implications to having two live sites with duplicate products? Could we just disavow a Shopify store in Webmaster Tools? Any other thoughts or suggestions? TIA!
-
Duplicate page content errors stemming from CMS
Hello! We've recently relaunched (and completely restructured) our website. All looks well except for some duplicate content issues. Our internal CMS (custom) adds a /content/ prefix to each page URL. Our development team has also set up URLs to work without /content/. Is there a way I can tell Google that these are the same pages? I looked into the URL parameters tool, but that seemed more in line with ecommerce and the like. Am I missing anything else?
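One possible fix, assuming the site runs on Apache with mod_rewrite and the URLs without /content/ are the preferred versions, is a blanket 301 in the document-root .htaccess (a sketch, not tested against this particular CMS):

RewriteEngine On
# Send any /content/... URL to the same path without the prefix
RewriteRule ^content/(.*)$ /$1 [R=301,L]

Alternatively, a rel=canonical on the /content/ versions pointing at the clean URLs would signal the same thing without redirecting visitors.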
-
Duplicate content throughout multiple URLs dilemma
We have a website with lots of categories, and the problem is that some subcategories have identical content on them. So, is it enough to just add different text to those problematic subcategories, or do we need to use a "canonical" tag pointing to the main category? The same dilemma applies to our search system and duplicate content. For example, the "/category/sports" URL has near-identical content to the "/search/sports" and "/search/sports-fitness/" URLs. Ranking well is important for all the different categories and subcategories, and also for individual search keywords. So, the question is: how do we make these pages unique/different enough to rank well on all of them? Would love to hear advice on how this can be solved using different methods and how it would affect our rankings. When do we actually need to use a "canonical" tag, and when is a 301 redirect better? Thanks!
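To illustrate the canonical option for the search/category overlap, here is a per-URL sketch for Apache 2.4 with mod_headers (the URLs are the ones from the question; in practice you would usually emit the canonical from the search template rather than hard-code it in server config):

<If "%{REQUEST_URI} == '/search/sports'">
Header set Link '<https://www.example.com/category/sports>; rel="canonical"'
</If>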
-
How To Handle Duplicate Content Regarding A Corp With Multiple Sites and Locations?
I have a client that has 800 locations. 50 of them are mine. The corporation has a standard website for their locations. The only thing different is their location info on each page. The majority of the content is the same for each website for each location. What can be done to minimize the impact/penalty of having "duplicate or near duplicate" content on their sites? Assuming corporate won't allow the pages to be altered.
-
Category Content Duplication
Does indexing category archive pages for a blog cause duplication? http://www.seomoz.org/blog/setup-wordpress-for-seo-success After reading this article I am unsure.
-
Migrating a site from a standalone site to a subdivision of large .gov.uk site
The scenario We’ve been asked by a client, a Non-Government Organisation who are being absorbed by a larger government ministry, for help with the SEO of their site. They will be going from a reasonably large standalone site to a small sub-directory on a high authority government site and they want some input on how best to maintain their rankings. They will be going from the Number 1 ranked site in their niche (current site domainRank 59) to being a sub directory on a domainRank 100 site). The current site will remain, but as a members only resource, behind a paywall. I’ve been checking to see the impact that it had on a related site, but that one has put a catch all 302 redirect on it’s pages so is losing the benefit of a it’s historical authority. My thoughts Robust 301 redirect set up to pass as much benefit as possible to the new pages. Focus on rewriting content to promote most effective keywords – would suggest testing of titles, meta descriptions etc but not sure how often they will be able to edit the new site. ‘We have moved’ messaging going out to webmasters of existing linking sites to try to encourage as much revision of linking as possible. Development of link-bait to try and get the new pages seen. Am I going about this the right way? Thanks in advance. Phil
-
Wordpress Duplicate Content Due To Allocating Two Post Categories
It looks like Google has done a pretty deep crawl of my site and is now showing around 40 duplicate content issues for posts that I have tagged in two separate categories, for example: http://www.musicliveuk.com/latest-news/live-music-boosts-australian-economy and http://www.musicliveuk.com/live-music/live-music-boosts-australian-economy. I use the All in One SEO Pack and have checked the noindex boxes for categories, archives, and tag archives, so Google shouldn't even be indexing this content, should it? I guess the obvious answer is to only put each post in one category, but I shouldn't have to, should I? Some posts are relevant to more than one category.
-
How do you prevent the mobile site becoming a duplicate of the full browser site?
We have a large site with 100k+ pages and we need to create a mobile site that gets indexed in the mobile engines, but I am afraid that Googlebot will consider these pages duplicates of the normal site pages. I know I can block it in robots.txt, but I still need it to be indexed for mobile search engines, and I think Google has a mobile crawler as well. Feel free to give me any other tips that I should follow while trying to optimize the mobile version. Any help would be appreciated 🙂
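One common approach for separate mobile URLs is the bidirectional annotation: rel="alternate" on the desktop page pointing at the mobile URL, and rel="canonical" on the mobile page pointing back at the desktop URL. Google documents this as link elements in the HTML head or as sitemap entries; expressed here as per-page HTTP Link headers for a single example page, purely as a sketch (hostnames and paths are placeholders, and each page needs its own pair, so this would normally be generated by the application rather than set as a blanket server rule):

# On www.example.com, for /page.html:
Header set Link '<https://m.example.com/page.html>; rel="alternate"; media="only screen and (max-width: 640px)"'
# On m.example.com, for /page.html:
Header set Link '<https://www.example.com/page.html>; rel="canonical"'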