Duplicate Content, Campaign Explorer & Rel Canonical
-
Google advises using rel canonical URLs to indicate which page, among pages with similar information, is the preferred one.
You are supposed to put a rel canonical tag on the non-preferred pages to point back to the preferred page.
How do you handle this with a product catalog using ajax, where the additional pages do not exist? An example would be:
/productcategory.aspx?page=1
/productcategory.aspx?page=2
/productcategory.aspx?page=3
/productcategory.aspx?page=4
The page=1, 2, 3, and 4 URLs do not physically exist; they simply reference additional products.
I have rel canonical URLs on the main page www.examplesite.com/productcategory.aspx, but I am not 100% sure this is correct or how else it could be handled.
Any Ideas Pro mozzers?
-
Hoping for the extra points here, so here goes how we handled the problem.
The solution is a lot simpler than all the thought we put into rationalizing the answer, and it has been working beautifully. Our Google Webmaster Tools show constant improvement weekly without us doing any additional work.
We canonicalized the paginated results to ".com/productcategory.aspx", not ".com/productcategory.aspx?page=1", so now all pages pass their juice back to the main page ".com/productcategory.aspx"; GWT doesn't even care about setting the "page=" parameter. We didn't include the canonical tag on the product pages, because each product has its own link juice that we want to preserve, so that each product may star as its own Google result.
So all we did was include a single canonical tag on the paginated category pages, and voilà, the whole solution works in GWT.
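For anyone wiring this up, here's roughly what that logic looks like. This is just a sketch in Python (the site URL and function name are made up; an .aspx site would do the same thing in its own templating): paginated category URLs emit a canonical pointing at the base category page, and product pages emit nothing so each keeps its own link juice.

```python
from urllib.parse import urlsplit

# Sketch of the accepted answer's approach (hypothetical names/URLs):
# every paginated category URL canonicalizes to the base category page;
# product pages get no canonical tag at all.
def canonical_tag(url, site="http://www.examplesite.com"):
    parts = urlsplit(url)
    if parts.path.endswith("productcategory.aspx") and "page=" in parts.query:
        # Strip the ?page= parameter so page=1..N all point at the base page.
        return '<link rel="canonical" href="%s%s" />' % (site, parts.path)
    return ""  # product pages: emit nothing, preserve their own juice
```

Calling `canonical_tag("/productcategory.aspx?page=3")` would return the tag pointing at `/productcategory.aspx`, while a product URL returns an empty string.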
-
Hi Christian,
It's likely a setting in the Platinum SEO plugin related to canonical URLs. The notice from SEOmoz just lets you know they are there; it is not an error.
If you have specific questions (like whether your plugin and template are set up correctly), I suggest starting a new question thread and including your site's URL.
-
Hi! I have no idea what any of this means. I have "14 rel canonical urls" and I have never entered a "rel canonical url" on any of my templates (are you talking about the CSS?).
I use the "Platinum SEO" plugin, and the sites with the most of these errors seem to be the ones using the "Socrates" template.
Help!
-
Hi! We're going through some of the older unanswered questions and seeing if people still have questions or if they've gone ahead and implemented something and have any lessons to share with us. Can you give an update, or mark your question as answered?
Thanks!
-
Damien,
I guess I was not very clear; basically I am suggesting the same thing as you: use the same variable that stores the URL to pull in the canonical URL.
-
@elephantseo Would that not just point every product page to one URL if using one template?
Could you not add the canonical tag exactly the same way as you would add a unique Title or Description (that's if you have the ability for that)?
You'd have to have a database entry and when one of the particular pages loads you'd pull the desired canonical tag from the database for that 'page'.
EDIT - If you create a separate template for each type of product, then go with the template canonical URL.
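The database-lookup idea above could be sketched like this (all names here are hypothetical; the dict stands in for a real database table): when a page loads, you pull its canonical URL from storage exactly the way you would pull a unique Title or Description.

```python
# Sketch of the per-page database lookup (hypothetical table and URLs):
# each URL maps to the canonical tag it should emit; no row means no tag.
CANONICALS = {
    "/productcategory.aspx?page=2": "/productcategory.aspx",
    "/red-widget.aspx": "/red-widget.aspx",  # product page: self-canonical
}

def canonical_for(request_url):
    target = CANONICALS.get(request_url)
    if target is None:
        return ""  # no database entry for this URL: emit no canonical tag
    return '<link rel="canonical" href="http://www.examplesite.com%s" />' % target
```

The point is that the template stays generic; the per-page value comes from the database, so one template does not force every product to point at one URL.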
-
Add the rel canonical to your template so that whenever the AJAX creates the new URL, it already has the rel canonical pointing to the preferred page.
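In other words, bake the tag into the template's head so every URL variant the AJAX paging produces serves the same canonical. A minimal sketch (the URL here is hypothetical; your platform's templating would do the equivalent):

```python
# Sketch of the template approach: the canonical tag lives in the category
# template's <head>, so any ?page=N variant the AJAX produces already
# carries the tag pointing at the preferred page.
HEAD_TEMPLATE = (
    "<head>\n"
    "  <title>%(title)s</title>\n"
    '  <link rel="canonical" href="%(canonical)s" />\n'
    "</head>"
)

def render_head(title, canonical="http://www.examplesite.com/productcategory.aspx"):
    return HEAD_TEMPLATE % {"title": title, "canonical": canonical}
```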
Related Questions
-
Duplicate content across domains?
Does anyone have suggestions for managing duplicate product/solution website content across domains? (specifically parent/child company domains) Is it advisable to do this? Will it hurt either domain? Any best practices when going down this path?
Intermediate & Advanced SEO | pilgrimquality
-
Duplicate content - how to diagnose duplicate content from another domain before publishing pages?
Hi 🙂 My company has a new distributor contract, and we are starting to sell products on our own webshop. Bio-technology is the industry in question, with over 1,000 products. Writing product descriptions from scratch would take many hours, so the plan is to re-write them. With permission from our contractors, we will import their product descriptions onto our webshop. But I am concerned about being penalized by Google for duplicate content. If we re-write it we should be fine, I guess, but how can we be sure? Is there any good tool for comparing only text (because I don't want to publish the pages just to compare URLs)? What else should we be aware of besides checking product descriptions for duplicate content? Duplicate content is a big issue for all of us; I hope these answers will be helpful for many of us. Keep up the hard work and thank you very much for your answers. Cheers, Dusan
Intermediate & Advanced SEO | Chemometec
-
Is a Rel Canonical Sufficient or Should I 'NoIndex'
Hey everyone, I know there is literature about this, but I'm always frustrated by technical questions and prefer a direct answer or opinion. Right now, we've got rel canonicals set up to deal with parameters caused by filters on our ticketing site. An example is that this: http://www.charged.fm/billy-joel-tickets?location=il&time=day rel canonicals to... http://www.charged.fm/billy-joel-tickets My question is whether this is good enough to deal with the duplicate content, or if the pages should be de-indexed. Assuming so, is the best way to do this by using robots.txt? Or do you have to individually 'noindex' these pages? This site has 650k indexed pages, and I'm thinking the majority of these are caused by URL parameters; while they're all canonicaled to the proper place, I think it would be best to have these de-indexed to clean things up a bit. Thanks for any input.
Intermediate & Advanced SEO | keL.A.xT.o
-
Why are these pages considered duplicate content?
I have a duplicate content warning in our PRO account (well several really) but I can't figure out WHY these pages are considered duplicate content. They have different H1 headers, different sidebar links, and while a couple are relatively scant as far as content (so I might believe those could be seen as duplicate), the others seem to have a substantial amount of content that is different. It is a little perplexing. Can anyone help me figure this out? Here are some of the pages that are showing as duplicate: http://www.downpour.com/catalogsearch/advanced/byNarrator/narrator/Seth+Green/?bioid=5554 http://www.downpour.com/catalogsearch/advanced/byAuthor/author/Solomon+Northup/?bioid=11758 http://www.downpour.com/catalogsearch/advanced/byNarrator/?mediatype=audio+books&bioid=3665 http://www.downpour.com/catalogsearch/advanced/byAuthor/author/Marcus+Rediker/?bioid=10145 http://www.downpour.com/catalogsearch/advanced/byNarrator/narrator/Robin+Miles/?bioid=2075
Intermediate & Advanced SEO | DownPour
-
Should we use the rel-canonical tag?
We have a secure version of our site, as we often gather sensitive business information from our clients. Our https pages have been indexed as well as our http version. Could it still be a problem to have an http and an https version of our site indexed by Google? Is this seen as being a duplicate site? If so can this be resolved with a rel=canonical tag pointing to the http version? Thanks
Intermediate & Advanced SEO | annieplaskett
-
Using robots.txt to resolve duplicate content
I have trouble with duplicate content and titles. I have tried many ways to resolve them, but because of the web code I am still stuck, so I have decided to use robots.txt to block the content that is duplicated. The first question: how do I write a robots.txt rule to block all URLs like these?
http://vietnamfoodtour.com/foodcourses/Cooking-School/
http://vietnamfoodtour.com/foodcourses/Cooking-Class/
User-agent: *
Disallow: /foodcourses
(Is that right?) And for the parameter URLs:
http://vietnamfoodtour.com/?mod=vietnamfood&page=2
http://vietnamfoodtour.com/?mod=vietnamfood&page=3
http://vietnamfoodtour.com/?mod=vietnamfood&page=4
User-agent: *
Disallow: /?mod=vietnamfood
(Is that right? I have a folder containing the module; could I use Disallow: /module/*?) The second question: which takes priority, robots.txt or the meta robots tag, if I use robots.txt to block a URL but that URL's meta robots is "index, follow"?
Intermediate & Advanced SEO | magician
-
How to resolve Duplicate Page Content issue for root domain & index.html?
SEOmoz returns a Duplicate Page Content error for a website's index page, with both domain.com and domain.com/index.html listed separately. We had a rewrite rule in the .htaccess file, but for some reason this has not had an impact, and we have since removed it. What's the best way (on an HTML website) to ensure all index.html links are automatically redirected to the root domain and these aren't seen as two separate pages?
Intermediate & Advanced SEO | ContentWriterMicky
-
How permanent is a rel="canonical"?
We are rolling out our canonicals now, and we were wondering: what happens if we decide we did this wrong and need to change where canonicals point? In other words, how bad of a thing is it to have a canonical tag point to page a for a while, then change it to point to page b? I'm just curious to see how permanent of a decision we are making, and how bad it will be if we screwed up and need to change later. Thanks!
Intermediate & Advanced SEO | CoreyTisdale