Duplicate Content errors - not going away with canonical
-
I am getting Duplicate Content Errors reported by Moz on search result pages due to parameters. I went through the document on resolving Duplicate Content errors and implemented the canonical solution to resolve it. The canonical in the header has been in place for a few weeks now and Moz is still showing the pages as Duplicate Content despite the canonical reference. Is this a Moz bug?
http://mathematica-mpr.com/news/?facet={81C018ED-CEB9-477D-AFCC-1E6989A1D6CF}
-
The problem is that you are implementing a canonical from all those parametrized pages to http://mathematica-mpr.com/news/, and the content on that page (an empty result list) is not the same as on the original link you provided in your post (which shows article results).
Canonical is used to declare a preferred URL for a page when two or more URLs lead to the same content. After implementing the canonical, you also have to make sure you are no longer linking to the "bad" URL, or it will never disappear from Google's SERPs.
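For reference, the canonical is a single link element in the page's head. For the parametrized news pages it would look like this (URL taken from the example above, and assuming /news/ really were the page you want to consolidate to, which, as noted, it is not here since the content differs):

```html
<head>
  <link rel="canonical" href="http://mathematica-mpr.com/news/" />
</head>
```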
Since the content is not the same on both pages, and the old URL is still accessible from your site, you are not fulfilling the requirements to make those URLs disappear.
Because these search result pages vary with the search options, they lead to very generic pages, and it is almost impossible to identify a clear canonical reference for them. If you don't want them to appear in the SERPs, I think the best option is to add a line to robots.txt that blocks those parameters, which will put an end to the problem.
Disallow: /news/?facet=*
This will keep your search results out of the index, ending the duplication problems. Please make sure this is what you want; the alternative is to keep those results indexed and continue living with the duplication issues.
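A quick way to sanity-check a rule like this before deploying it is Python's standard-library robots.txt parser. A minimal sketch follows; note that the stdlib parser does plain prefix matching and does not understand Google's `*` wildcard, so the rule is written without the trailing `*` (for Google the two forms behave the same, since every Disallow path is already treated as a prefix):

```python
from urllib import robotparser

# The rule as it would appear in robots.txt, inside a User-agent group.
rules = """\
User-agent: *
Disallow: /news/?facet=
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())
rp.modified()  # mark the rules as loaded so can_fetch() evaluates them

# A parametrized search-result URL is blocked...
print(rp.can_fetch("*", "http://mathematica-mpr.com/news/?facet={81C018ED}"))  # False
# ...while the plain news page stays crawlable.
print(rp.can_fetch("*", "http://mathematica-mpr.com/news/"))  # True
```

Keep in mind this only checks the rule's matching behavior; whether already-indexed URLs drop out of the SERPs is up to Google re-crawling the site.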