WordPress: Tags generate duplicate content - just delete the tags!?
-
People I've asked say tags are bad and spammy, and as far as I can see they generate all of my duplicate page content issues.
So the big question is: why does Google so often prefer to show these tag URLs in the SERPs? It can't be all bad! :)))
Then, after some research, I found the "Term Optimizer" on Yoast.com, which should help with exactly this problem, but it no longer seems to be available.
So maybe there is another plugin that can help... or should I just delete all tags from my blog and install permanent redirects?
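If the tags were deleted, the permanent redirects could be sketched in Apache's .htaccess like this (assuming the default WordPress /tag/ permalink base; the slugs and targets are hypothetical examples):

```apache
# Redirect a removed tag archive to a relevant category (301 = permanent).
# The tag slugs and targets below are only examples.
Redirect 301 /tag/duplicate-content/ /category/seo/
Redirect 301 /tag/wordpress-tips/ /category/wordpress/

# Or send every tag archive to the blog home in one rule:
RewriteEngine On
RewriteRule ^tag/.* / [R=301,L]
```

Whether redirects or noindexing is the better fix depends on whether the tag pages get any traffic worth preserving.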
Is this the solution? -
I don't understand the question. Perhaps submit a new question to create a new thread, and elaborate on the issue.
-
It should take care of the duplicate content issue as it relates to tag pages, but it will not show as a 404, because the page will still be there; it is just removed from Google's index.
If you adjusted the settings in Yoast it should apply to all future posts and tags.
-
My blog provides articles for technical queries and repair options. Many keywords share the same sort of steps, and that has caused problems:
CASE 1: We replaced the keywords and ended up creating the same content for all posts.
CASE 2: To close that SEO loophole, I added tags for the relevant steps that follow the same troubleshooting.
Now the tags are bothering me again.
What should I do?
-
Hi,
I also have this problem with our site: Moz has recorded 50 duplicate pages. If I set "noindex, follow" in Yoast, will it create 404s, or will I only get a 404 if I remove the tags completely?
Also, is there a way to set this up in Yoast for all future blog posts, or will I need to do it every time I publish a new post?
Thanks
Jeh
-
I said the same thing! I couldn't figure out why it wasn't the default either. We've had the blog up since before I joined my current company, so there are a bunch of duplicate pages I'm trying to fix now... frustrating!
Best of luck,
Tyler
-
Hi Tyler,
yes, I ended up simply checking the "noindex, follow" option for tags in the Yoast plugin (Taxonomies tab).
Now all the duplicate content issues are gone... so why isn't this option set by default? Hmmm
Thanks
Holger -
Hey!
I was just looking into this same issue myself, and I figured I'd share the URL from Yoast that ended up answering most of my questions. Section 3 gives you some solutions to the duplicate page issue.
Hope this helps,
Tyler
-
Does that validate?
I'm no expert, but I think the trailing slash shouldn't be used with meta elements, except perhaps when the declared doctype is XML or XHTML. If you do use it, leave a space just before it, as in your rel="canonical" line, for compatibility's sake (old advice, but it won't hurt).
And shouldn't there be a space between the comma and "follow", i.e. "noindex, follow"?
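For reference, the two variants discussed above would look something like this in the page head:

```html
<!-- HTML5 style: no trailing slash required on void elements,
     and a space after the comma in the robots value is conventional -->
<meta name="robots" content="noindex, follow">

<!-- XHTML style: self-closing, with a space before the slash for
     compatibility with older parsers -->
<meta name="robots" content="noindex, follow" />
```

Both values are parsed the same way by Google; the whitespace question is mainly about consistency and validation.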
-
Yes, thanks.
I'm curious... and hope it works. And yes, it looks fine:
<title>Christo | INLINEAR Digital Marketing & Brasilien Blog</title>
-
Yes, that happens sometimes. Just make sure the plugin has implemented the tags correctly, and that the head of your pages doesn't contain any other misplaced meta tags or invalid code, since that could cause the issue.
Then give it a try; it usually works. You'll have to wait until the site is re-crawled though, so give it some time. If it still doesn't work, you could take more drastic measures.
Good luck, and follow Alan's advice of using 'noindex, follow' in your tag if you want to be sure of keeping the link flow.
-
Hi Branagan,
yes, I can set noindex, follow in the Yoast plugin, but it seems Google does not always obey this meta tag. See this comment from a user on a blog:
Peter Hinson, October 25, 2012 at 7:22 am
Hi, I’ve been using Yoast’s SEO plugin for years now, but unfortunately it does not always prevent Google from indexing pagination.
Matt Cutts, the Google technical expert and representative, says in his video that the ‘noindex’ meta tag is a weak method of trying to prevent Google from indexing a page, since Googlebot will not always obey it.
-
If you are using similar tags on almost every post, yes, it will create duplicate content; but if you use them distinctly, it won't.
For example, I had a movie site where I used tags for the movie cast. Since every movie has a different cast, the tags never created any duplicate content; that site was an SEO miracle.
-
The sites that have tag pages ranking usually have on-page problems or a penalty: instead of the intended page, Google shows the tag page... or the privacy page... or maybe the keyword just isn't competitive and Google has nothing better to put there. I've seen all of these cases.
It's up to you: you can noindex them or do whatever you want. As long as you don't go crazy, put 1,000 tags on a post, and spam links to those tag pages, they shouldn't be a problem. At least in my experience.
-
Make sure that it is a 'noindex, follow' tag so that you get your link juice back.
-
I guess tags are there because they fulfill a function, but if they aren't useful for your users it shouldn't be a problem to remove them, as long as you take the necessary measures to avoid tons of nasty 404s. If what worries you is duplicate content, you could just noindex the tag pages. I think most SEO plugins for WP have this option; otherwise it shouldn't be hard to include the noindex tag in tag.php (or whatever file generates those pages). That way you wouldn't have to deprive your users of a useful function (if that's the case, of course).
About your question on Google's preference: I'm not aware of it, but couldn't it be because many internal pages link to the tag pages, and with the same anchor text?
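If you are not using a plugin, a minimal sketch of that approach could look like this (untested, and assumes a standard WordPress theme; the function name is made up, and a functions.php hook is usually cleaner than editing the tag.php template directly):

```php
<?php
// In the theme's functions.php: print a robots meta tag on tag archives only.
// wp_head, is_tag(), and add_action() are standard WordPress functions;
// the function name below is just an example.
function my_noindex_tag_archives() {
    if ( is_tag() ) {
        echo '<meta name="robots" content="noindex, follow">' . "\n";
    }
}
add_action( 'wp_head', 'my_noindex_tag_archives' );
```

The is_tag() conditional keeps regular posts and pages indexable while the tag archives drop out of the index on the next crawl.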
-
I would get rid of the tags; I don't think users use them, they're more of a gimmick.
Search engines are then more likely to rank your primary page, rather than the pages produced by tags.