Duplication, pagination and the canonical tag
-
Hi all, and thank you in advance for your assistance.
We have an issue with paginated pages being seen as duplicates by the Moz Pro crawlers.
The paginated pages do have duplicated content, but they are not duplicates of each other. Rather, they pull through a summary of the product descriptions from other landing pages on the site.
I was planning to use rel=canonical to deal with them; however, I am concerned because the paginated pages are not identical to each other, but each does feature its own set of duplicate content!
We have a similar issue with pages that are not paginated but feature tabs that alter the URL parameters like so:
?st=BlueWidgets
?st=RedSocks
?st=Offers
These are being seen as duplicates of the main URL, and again all feature duplicate content pulled from elsewhere in the site, but are not duplicates of each other. Would a canonical tag be suitable here?
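For reference, if a canonical were used here, each parameter variation would carry a link element in its head pointing back at the main URL. A minimal sketch (the example.com path is purely illustrative, not the actual site):

```html
<!-- Placed in the <head> of ?st=BlueWidgets, ?st=RedSocks and ?st=Offers alike,
     all pointing at the one URL you want indexed -->
<link rel="canonical" href="https://example.com/page" />
```

Whether that is the right tool for these pages is exactly what the answers below discuss.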
Many Thanks
-
rel=next/prev is not for duplicated content; it just shows Google how the parts relate to the whole.
An alternative to rel=next/prev is the "Classic Pagination for SEO" approach that uses noindex, covered in another article by Adam:
http://searchengineland.com/the-latest-greatest-on-seo-pagination-114284
If you have a duplicate issue, this would solve it as you would noindex all the duplicate pages.
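As a sketch of that noindex route: each paginated page beyond the first would carry a robots meta tag, while page one stays indexable. The "follow" directive lets link equity continue to flow through the series (the URL in the comment is hypothetical):

```html
<!-- On /category?page=2, /category?page=3 and so on; omit from page 1 -->
<meta name="robots" content="noindex, follow" />
```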
What you need to do (and I can't do this for you), is to look at all the crawl paths that you are providing Google. As I mention above, you are not doing any favors to Google or to your site when you show Google an infinite number of paths to get to the same content. It just wastes Google's time and you don't want to do that when Google also has to crawl the rest of the internet. If you solve this issue, you will solve your duplicate issue.
AJ Kohn just posted an article on the concept of crawl budget that talks about this. I think the article is quite good, and it explains why we need to look at all the topics of noindex, nofollow, robots, canonical and rel=next/prev: http://www.blindfiveyearold.com/crawl-optimization
-
Thanks CleverPhD,
That's a very interesting read by Adam Audette too, thanks.
I should say that there's no internal search; each tab has a series of duplicated 'blurbs' taken from the product's unique landing page, while the body copy remains the same across the slight variations in the URL. So with:
example.com/example
example.com/example/?st=BlueWidgets
example.com/example/?st=RedSocks
all of these will feature the same body copy, while the last two will have a series of small descriptions from other landing pages in the site. Would the canonical tag be appropriate in this case? We only need to index 'example.com/example'.
Also, does rel=next/prev take into account duplicate content? We want only the main URL indexed, as all the paginated pages feature duplicate content; there is no view-all page, however.
Many thanks
-
If I am understanding the question - I think pulling in some body copy from each search result (and not just the whole page) would be fine. I think Google will see that this is a search result and that you are pointing to other pages. You are probably going to pull in text from the title too. This is common practice in search results - heck Google does it!
If you are still concerned about the pulled-in descriptions, your option is to set up the system to have an alternate description for each page. Use the alternate description when you pull it into your main page. It is more work, but it will eliminate this issue.
Separately, paginated pages no longer need to be canonicalized to the index page. You can use rel=next and rel=prev.
http://googlewebmastercentral.blogspot.com/2011/09/pagination-with-relnext-and-relprev.html
https://support.google.com/webmasters/answer/1663744?hl=en
It explains to Google the relationship between P1 and P2,3,4,5,n etc.
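As a sketch of what those Google posts describe, the link elements sit in the head of each page in the series, with page 1 carrying only a "next" and the last page only a "prev" (the /widgets URLs here are placeholders, not the actual site):

```html
<!-- In the <head> of page 2 of a hypothetical /widgets series -->
<link rel="prev" href="https://example.com/widgets?page=1" />
<link rel="next" href="https://example.com/widgets?page=3" />
```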
Beyond that, you need to watch that you do not get into too many paginated pages to get to the exact same product pages. Let's say you had 1,000 widgets that were Blue, Red and Green and also were Free, Expensive or Cheap. You would have several sets of paginated pages (one set for Blue, one each for Red, Green, Free, Cheap and Expensive, one for Red and Expensive, etc.). It gets to be a little crazy, as they all lead to the same set of widget product pages. You need to manage how Google crawls all that so your paginated category pages do not look like duplicates. Adam Audette writes great stuff on this. Look here for things to consider:
http://www.rimmkaufman.com/blog/site-search-dynamic-content-and-seo/01032013/
-
Thank you Robert, and for the helpful link.
You did read my question correctly; however, I failed to ask it entirely correctly. Just to complicate matters, I neglected to mention that there is body copy on each page, which technically will be duplicated.
It sits above the tabs and does not change, while the tabbed pages (under new URL parameters) pull in a sentence or two of product description from elsewhere on the site (a unique landing page).
So,
?st=BlueWidgets
?st=RedSocks
?st=Offers
will all feature the same body copy and different duplicate content. For obvious reasons, we only want the search engines to index the main URL.
Any ideas?
Thanks again
-
Hi
It doesn't sound like rel=canonical is the solution, as each one of your pages might feature multiple pieces of content from various other parts of your website (if I've read your question correctly) - so which would be the canonical version of the page?
You could use Parameter Handling in Webmaster Tools to ensure Google knows what to do with your various parameters. Moz doesn't matter here, as long as Search Engines are aware of how to handle your pages correctly.
There's a good overview here.
I hope that's helpful