Why does the SEOmoz bot see duplicate pages even though I am using the canonical tag?
-
Hello there,
Today the SEOmoz bot found and flagged the following pages on my website as "duplicate content":
http://www.virtualsheetmusic.com/score/PatrickCollectionFlPf.html?tab=mp3
http://www.virtualsheetmusic.com/score/PatrickCollectionFlPf.html?tab=pdf
I am wondering why, given that both of those pages carry a canonical tag pointing to the main product page below:
http://www.virtualsheetmusic.com/score/PatrickCollectionFlPf.html
Shouldn't the SEOmoz bot follow the canonical directive and not report those two pages as duplicates?
Thank you for any insight into what I am probably missing here!
-
Thank you Peter, I got your ticket reply.
That makes perfect sense, and it matches what Dr. Peter pointed out on a different thread where I was discussing this issue further:
http://www.seomoz.org/q/why-seomoz-bot-consider-these-as-duplicate-pages
I was simply confused by your report.
Thank you again for your help, and I hope you will improve the report interface to avoid this kind of confusion in the future.
Best,
Fabrizio
-
Hi there,
Thanks for reaching out to us. I replied to you in a support ticket, but I wanted to share the answer with everyone since I think it might be relevant to this discussion.
I looked into your campaign, and it seems this is happening because of where your canonical tags are pointing. You can see the duplicate pages by clicking on the number to the right of the link. These pages are considered duplicates because their canonical tags point to different URLs. For example:
http://www.virtualsheetmusic.com/score/PatrickCollectionFlPf.html?tab=mp3 (Duplicate 1) is considered a duplicate of
http://www.virtualsheetmusic.com/score/PatrickCollectionVcPf.html?tab=mp3 (Duplicate 2) because the canonical tag for the first page is CANON1 (http://screencast.com/t/tqvDZrLsyz8D) while the canonical for the second URL is CANON2 (http://screencast.com/t/FOguPJmK0).
Since the canonical tags point to different pages, it is assumed that CANON1 and CANON2 are likely to be duplicates themselves.
Here is how our system interprets duplicate content vs. rel canonical:
Assuming A, B, C, and D are all duplicates,
If A references B as the canonical, then they are not considered duplicates
If A and B both reference C as canonical, A and B are not considered duplicates of each other
If A references C as canonical and B declares no canonical, then A and B are considered duplicates
If A references C as canonical and B references D, then A and B are considered duplicates
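In other words, two pages with near-duplicate content are only exempted when their canonical tags converge on the same URL. Here is a rough Python sketch of how those rules combine; this is not our actual code, just an illustration:

```python
def still_flagged_as_duplicates(page_a, page_b, canonical):
    """Given two pages whose content is near-duplicate, return True if
    they would still be reported as duplicates under the rules above.

    `canonical` maps each URL to its rel=canonical target; a page with
    no canonical tag is treated as pointing to itself."""
    target_a = canonical.get(page_a, page_a)
    target_b = canonical.get(page_b, page_b)
    # Rules 1 and 2: one page canonicalizes to the other, or both
    # converge on the same canonical URL -> not reported as duplicates.
    if target_a == page_b or target_b == page_a or target_a == target_b:
        return False
    # Rules 3 and 4: the canonical targets diverge (or only one page
    # declares a canonical) -> still reported as duplicates.
    return True
```

In your case the fourth rule applies: the two flagged URLs canonicalize to CANON1 and CANON2 respectively, so their canonical targets diverge and they stay flagged.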
The examples you've provided actually fall into the fourth case I've listed above.
Hope that helps,
Best,
Peter
SEOmoz Help Team.
-
Thinking about this further, I don't see how these pages can be considered near duplicates, since their content is quite different:
http://www.virtualsheetmusic.com/score/PatrickCollectionFlPf.html?tab=mp3
http://www.virtualsheetmusic.com/score/PatrickCollectionFlPf.html?tab=pdf
Thoughts??!!
-
Can nobody tell me why SEOmoz ignores my canonical tag definitions? According to some comments on the following thread:
http://www.seomoz.org/blog/visualizing-duplicate-web-pages
it should actually ignore pages with a canonical tag and NOT mark them as duplicates, but in my experience (as explained above), that hasn't been the case.
-
OK, thank you, now I get the point... So here is my next question: is there a way to tell the SEOmoz bot to ignore duplicate pages that have a defined canonical tag? If not, the SEOmoz duplicate page report is useless for me; I am not interested in duplicate pages for which I have already defined a canonical tag.
Thanks!
-
The canonical tag lets you pick which of the duplicates will be indexed. But Google still has to crawl the other pages when it could be crawling other parts of your site. It's an opportunity cost. If you can accept slower crawls, you can ignore the issue.
-
I am sorry, but I don't understand your point. If two pages are similar, we can use the canonical tag to "consolidate" them and avoid duplicate issues. Am I right? Or what are canonical tags for?
-
While I agree that SEOmoz should better categorize duplicates that have canonical tags, the reason they still flag them as duplicates is crawl budget. Remember, Google still has to crawl these duplicate pages when it could be crawling something else instead. The canonical tag only helps by letting you pick which version of the duplicate content gets indexed. It's better to have no duplicate content at all than to have canonicalized duplicates.
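In the meantime, if you want to cut that noise out of the report yourself, one option is to export the crawl report and filter it on your end. Here is a hypothetical Python sketch; it assumes a CSV export with "url" and "canonical" columns, which may not match the actual export format:

```python
import csv

def duplicates_without_canonical(report_path):
    """Read an exported duplicate-page report (CSV) and keep only the
    rows that do NOT declare a canonical tag.

    Assumes columns named 'url' and 'canonical'; an empty 'canonical'
    cell is taken to mean the page has no canonical tag."""
    with open(report_path, newline="") as f:
        return [row["url"]
                for row in csv.DictReader(f)
                if not row.get("canonical", "").strip()]
```

Rows you've already handled with a canonical tag drop out, leaving only the duplicates that still need attention.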