Tags on my website cause duplicate content
-
Hi, I just recently started a website and I am new to Moz Pro. Under high priority, Moz Pro flagged "duplicate page content" on my site.
What I realized is that the duplicate content comes from the tags I put on my posts. Because it is a WordPress blog, we are allowed to add tags in the sidebar before publishing a post, and each of these tags creates a page that links to the same post under a different URL.
For example, a post URL and a tag URL can both direct to the same page.
So how do I solve this? Do I just stop tagging whenever I write a post? Delete all tags where they are not necessary?
I've seen methods like a 301 redirect or rel=canonical, but is there any way to solve this so I don't face the issue every time I make a new post on my blog? It doesn't make sense to set up a 301 redirect for every single tag whenever I write a new post, right?
Thanks, guys.
-
Hi,
Duplicate content is bad for search because it forces pages on your site to compete with each other for rank. If each tag contains only one post, then you are duplicating every post: once at the original post URL and once on the tag page, which displays that same post and only that post.
If you use a lot of the same tags on each post, for example tagging every post 'blog' and 'daily', then those tag pages will contain the same posts and therefore be duplicate content.
It may be worth checking your analytics to see whether any of these tag pages are getting entrances from organic search, which will tell you if the 'duplicate' is outranking the original post. Often that happens because the tag page contains more information on the subject than a single blog post, so you may not be able to replicate that success with a smaller, single post.
As the previous answer stated, a robots.txt rule such as Disallow: /tags/whatever2 will tell a spider not to crawl that page. You could do this selectively by disallowing only the tags being flagged as duplicates, or block all tag pages from being crawled with Disallow: /tags/*
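As a minimal robots.txt sketch (the /tags/ path follows the example above — a default WordPress install uses /tag/ instead, so check your own URL structure first):

```text
# robots.txt — keep crawlers out of all tag archive pages
User-agent: *
Disallow: /tags/
```

Note that Disallow matches by path prefix, so Disallow: /tags/ already covers everything under that directory; the trailing * works in Google and Bing but isn't strictly needed.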
But every site is different, and you will need to decide for yourself whether your 'duplicate' content is actually harming your site; you might find that those pages are naturally full of keywords and are attracting all your traffic.
Hope that helps!
-
Sorry, a huge newbie here.
Assuming I want to use the "block with meta noindex" approach on my tag pages, how do I do so? The guide only teaches how to add a Disallow rule to robots.txt, but what about the meta tag?
Do I just add the noindex meta tag on the tag page, and should I do that for every tag?
-
Tags are good for the user experience, but they can be a problem for search engines because of the seemingly duplicated content. I would use the robots.txt file or a noindex meta tag (the meta tag is the preferred option) to block search engines from accessing and indexing tag-based pages.
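For a WordPress blog, one way to output that meta tag automatically is a small snippet in the active theme's functions.php. This is a sketch assuming a standard install; SEO plugins such as Yoast expose the same option (noindex tag archives) with no code at all:

```php
<?php
// Sketch for the active theme's functions.php.
// On tag archive pages only, print a robots meta tag into the <head>
// so search engines drop the page from the index but still follow its links.
add_action( 'wp_head', function () {
    if ( is_tag() ) {
        echo '<meta name="robots" content="noindex, follow">' . "\n";
    }
} );
```

Because this runs on every tag archive automatically, new posts and new tags need no extra work. One caveat: if robots.txt already blocks a page, crawlers never fetch it and so never see the meta tag — pick one approach per page, not both.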
When Moz's spider sees this markup, it should note that the issue has been addressed, and the "duplicate content" detections should disappear. Good luck!