Duplicate content due to csref
-
Hi,
When I go through my pages, I can see that a lot of my csref codes result in duplicate content when SEOmoz runs its analysis of my pages.
Of course I get important knowledge through my csref codes, but I'm quite uncertain how much it affects my SEO results.
Does anyone have any insights into this? Should I be more cautious about using csref codes, or doesn't it create problems big enough for me to worry about?
-
Yes, to set up rel-canonical properly, every page that could conceivably be tagged with a csref= parameter should have a self-referencing canonical. The tags are easy to set up, in theory, but once you get into a large site and/or CMS, setting them up on dozens or hundreds of pages can be tricky. Ultimately, it's a more effective approach that has some other benefits (like scooping up stray duplicates that may have been created by other URL parameters), but it really depends on your development resources and how complex your site is.
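As a sketch of what that looks like, a self-referencing canonical is just a link tag in the page's head. Using the partner page URL mentioned elsewhere in this thread as an example (this is illustrative markup, not Tryg's actual source):

```html
<!-- Placed in the <head> of index.html and served on every URL variant,
     including the ones tagged with ?csref=... -->
<link rel="canonical"
      href="http://www.tryg.dk/om-tryg/fakta-om-tryg/samarbejdspartnere/index.html" />
```

Because the tag always points at the parameter-free URL, Google folds the csref variants into the one canonical page.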
-
Hi,
Thanks for the quick and competent reply.
I guess the reason that Google has only registered 8 pages is that many of the pages we have csref on are campaign pages, and they haven't "lived" for very long yet.
As I understand you, there are two ways to proceed with this: one being telling Google Webmaster Tools to ignore the csref parameter, and the other being the canonical links.
The first is quite straightforward, I guess; it's just a matter of registering in Google Webmaster Tools that the csref parameter should be ignored on all URLs under www.tryg.dk.
The latter I'm not so sure how to proceed with. Is it a matter of giving every page that gets a csref a canonical link? Or what is the best way to go about that?
-
The good news is that you only seem to have about 8 of these pages in the Google index. You can use this query on Google to see them:
site:www.tryg.dk inurl:csref
Ideally, I'd use the canonical tag on those pages to strip out the parameter and de-index any duplicates, but across the site that can be tricky. You could also tell Google Webmaster Tools to ignore the csref parameter via parameter handling - it's not quite as robust a solution, but it's a lot easier to implement.
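One way to generate that canonical URL in a CMS is to strip known tracking parameters server-side before rendering the tag. A minimal sketch (the `csref` parameter name and sample URL are from this thread; the helper name and parameter list are hypothetical):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Tracking parameters that should never appear in the canonical URL.
TRACKING_PARAMS = {"csref"}

def canonical_url(url):
    """Return the URL with known tracking parameters removed."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k not in TRACKING_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), parts.fragment))

print(canonical_url(
    "http://www.tryg.dk/om-tryg/fakta-om-tryg/"
    "samarbejdspartnere/index.html?csref=Disclaimer_Nordea"))
# -> http://www.tryg.dk/om-tryg/fakta-om-tryg/samarbejdspartnere/index.html
```

The output of this helper is what goes into the `href` of the rel=canonical tag, so legitimate query parameters survive while tracking ones are dropped.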
-
Hi,
Thanks for your reply.
It is exactly a URL generated based on our tracking codes. For example, when I look in the list of duplicated content on our page here in SEOmoz, I get the following URLs:
http://www.tryg.dk/om-tryg/fakta-om-tryg/samarbejdspartnere/index.html?csref=Disclaimer_Nordea
http://www.tryg.dk/om-tryg/fakta-om-tryg/samarbejdspartnere/index.html?csref=Bundmenu_Om_Tryg_Vores_Partnere
http://www.tryg.dk/om-tryg/fakta-om-tryg/samarbejdspartnere/index.html
The last one being the "original" page, and the two above being URL variants with csref appended by our tracking via Omniture.
So my question is how I make sure that this does not have a negative effect on our SEO.
-
Apologies, but I'm not familiar with the csref parameter - could you tell me what information it passes or give me a sample URL (you can make it generic and mask your domain info)?
It sounds like some kind of tracking code, in which case it can definitely start to create duplicate content issues. You could probably use the rel=canonical tag to make Google "collapse" those pages, or you could tell Google to ignore the parameter in Google Webmaster Tools. Neither should impact your tracking.