Best Way to Fix Duplicate Content Issues on a Blog If URLs Are Set to "No-Index"?
-
Greetings Moz Community:
I purchased a SEMrush subscription recently and used it to run a site audit.
The audit detected 168 duplicate content issues, mostly relating to blog post tags. I suspect these issues may be due to canonical tags not being set up correctly.
My developer claims that since these blog URLs are set to "no-index," the issues do not need to be corrected. My instinct is to avoid any risk of duplicate content by setting up canonicalization correctly. In addition, even if these pages are set to "no-index," they are still passing PageRank. Furthermore, I don't know why a reputable company like SEMrush would flag these as errors if in fact they are not errors.
So my question is, do we need to do anything with the error pages if they are already set to "no-index"? Incidentally the site URL is www.nyc-officespace-leader.com. I am attaching a copy of the SEMrush audit.
Thanks, Alan
-
Thanks for cleaning that up, Dennis. That is great advice.
-
I sometimes encounter that with my clients. The basic thing to do is to add a canonical tag even though the pages are already noindexed, especially for themes that embed certain pages within a page. It sounds crazy, but some themes actually do this, so you can't remove the duplicate page; noindexing it and then adding a canonical is good enough.
But since you mentioned these are just tags, simply noindexing them is fine. (I'm assuming these are basic WordPress tags.)
As for your pagination question, use a canonical to link to a URL where all the posts are shown. That's the basic rule for that situation, and it's covered in Google's guidelines on pagination.
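For anyone wiring this up, here is a minimal sketch of checking a page for the head-tag combination described above: a "noindex" robots meta tag plus a rel="canonical" link. The sample HTML and canonical URL below are made-up examples; in practice you would fetch each URL that SEMrush flagged.

```python
# Sketch: detect a "noindex" robots meta tag and a rel="canonical" link
# in a page's HTML. The sample page below is a hypothetical example.
from html.parser import HTMLParser

class RobotsCanonicalParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.noindex = False   # True if <meta name="robots"> contains "noindex"
        self.canonical = None  # href of <link rel="canonical">, if present

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.noindex = "noindex" in a.get("content", "").lower()
        elif tag == "link" and a.get("rel", "").lower() == "canonical":
            self.canonical = a.get("href")

page = """<html><head>
<meta name="robots" content="noindex,follow">
<link rel="canonical" href="http://www.nyc-officespace-leader.com/blog">
</head><body>...</body></html>"""

parser = RobotsCanonicalParser()
parser.feed(page)
print(parser.noindex, parser.canonical)
```

Running a check like this over the flagged URLs makes it easy to see which noindexed pages are still missing a canonical.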
-
Hi Reserve:
Thanks for your response.
So Google is able to view this content because of the links that go to and from it? So I am not protected by the no-index tag?
I am very unfamiliar with the strange tags generated by WordPress. Do you think tags such as the following can be removed without any detrimental effect? If the URLs for these tags are removed, should redirects be added?
http://www.nyc-officespace-leader.com/blog/tag/boutique-space
http://www.nyc-officespace-leader.com/blog/tag/meatpacking-district
http://www.nyc-officespace-leader.com/blog/tag/restaurant-space
http://www.nyc-officespace-leader.com/blog/tag/retail-space
http://www.nyc-officespace-leader.com/blog/tag/store-space
http://www.nyc-officespace-leader.com/blog/tag/the-plaza-district
http://www.nyc-officespace-leader.com/blog/tag/times-square
http://www.nyc-officespace-leader.com/blog/tag/chelsea
http://www.nyc-officespace-leader.com/blog/tag/upper-east-side
http://www.nyc-officespace-leader.com/blog/tag/upper-west-side
Also, should canonical tags be added to blog URLs even if they are set to no-index? For example:
http://www.nyc-officespace-leader.com/blog/page/2
http://www.nyc-officespace-leader.com/blog/page/3
http://www.nyc-officespace-leader.com/blog/page/4
Thanks, Alan
-
I would remove them, to be safe. Google sees them regardless of the "no-index," and I think the cleaner you can get your data, the better off you will be in the long run. While there may be no harm at this time, things always change. I know one thing for sure: you don't want a duplicate content issue.
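To sketch what that cleanup could look like, here is a hedged example of pairing each retired tag URL from the question with a 301 target. Sending everything to /blog is an assumption for illustration; a more closely related page is a better target where one exists.

```python
# Sketch: build 301 redirect targets for removed tag archives so no link
# equity is stranded. Redirecting all of them to /blog is an assumption.
removed_tags = [
    "boutique-space", "meatpacking-district", "restaurant-space",
    "retail-space", "store-space", "the-plaza-district", "times-square",
    "chelsea", "upper-east-side", "upper-west-side",
]

redirects = {f"/blog/tag/{tag}": "/blog" for tag in removed_tags}

def redirect_target(path):
    """Return the 301 destination for a retired tag path, or None."""
    return redirects.get(path.rstrip("/"))

print(redirect_target("/blog/tag/times-square/"))
```

A mapping like this can then be translated into whatever redirect mechanism the site uses (server config, plugin, etc.).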
Related Questions
-
URL indexed but not submitted in sitemap, however the URL is in the sitemap
Dear Community, I have the following problem and would be grateful if you could help. Cheers.
Symptoms: On Search Console, Google reports that some of our old URLs are "indexed, not submitted in sitemap." However, those URLs are in the sitemap, and the sitemap has been submitted successfully with no error message.
Potential explanation: We have an automatic cache-clearing process that runs once a day, and we use that time as the last-modification date in the sitemap. Imagine www.example.com/hello was last modified in 2017. Because the cache is cleared daily, the sitemap will say "last modified: yesterday," even though the content of the page has not changed since 2017.
We have a Z after the sitemap time; could it be that the bot does not understand the time format? Also, the sitemap contains only HTTP URLs, and our HTTPS URLs are not in it. What do you think?
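On the time-format worry: a trailing Z is valid W3C Datetime (it simply means UTC), so the format itself should not be the issue. A minimal sketch of emitting a <lastmod> from the content's true modification date instead of the daily cache-clear time (the URL and date here are invented):

```python
# Sketch: emit a sitemap <url> entry using the content's real modification
# date, not the daily cache-clear time. The trailing "Z" is valid W3C
# Datetime (UTC), so the format itself is not the problem.
from datetime import datetime, timezone

def sitemap_entry(loc, last_modified):
    lastmod = last_modified.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return f"<url><loc>{loc}</loc><lastmod>{lastmod}</lastmod></url>"

entry = sitemap_entry("http://www.example.com/hello",
                      datetime(2017, 5, 1, 12, 0, tzinfo=timezone.utc))
print(entry)
```

Keeping lastmod tied to real content changes (rather than cache events) makes the sitemap a more trustworthy signal.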
Intermediate & Advanced SEO | ZozoMe
-
Duplicate Titles caused by blog
Hey, I've done some research and understand canonical tags and rel="prev"/rel="next," but I wanted to get someone's opinion on whether we need them, since the articles are somewhat independent of each other in content (there's a focus on both banks and accountants). We have over 68 pages of blog material, from http://www.sageworks.com/blog/default.aspx?page=7 through http://www.sageworks.com/blog/default.aspx?page=68. Thanks in advance for your help!
Intermediate & Advanced SEO | josh123
-
Fixed "Lower-Case/Mixed-Case" Internal Links Causing Duplicates. Now What?
Hi. After a site re-launch, Moz crawled the site and reported over 150 duplicate content errors. It was determined that they were caused by incorrect capitalization in internal links. Using Screaming Frog, I found all 500+ internal links and fixed them to match the actual URLs. Now the site is 100% consistent across the board, as best I can tell. I am unsure what to do next, though. We launched the site with all the internal link errors, and now many of the pages that are indexed and ranked use the incorrect URL form. Some have said to use a canonical tag, but how can I use a canonical tag on a page that doesn't even exist? Same thing with a 301: can I redirect /examplepage to /ExamplePage if only /ExamplePage actually exists? I would really appreciate some advice on what to do. After I fixed the internal links, I waited a week; Moz crawled the site again and reported all the same errors, and then even more, all capitalization. It seems like a mess. After I did another Screaming Frog crawl, it showed no duplicates, so I know I was successful in fixing the internal links. Help!
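On the 301 question: yes, you can redirect /examplepage to /ExamplePage even though only the latter exists; the redirect just has to match request paths case-insensitively against the list of real URLs. A hedged sketch of that logic (the paths are hypothetical):

```python
# Sketch: map any casing of a request path onto the one true mixed-case
# path, suitable for driving 301 redirects. The example paths are invented.
real_paths = ["/ExamplePage", "/About-Us", "/Blog"]
canonical_by_lower = {p.lower(): p for p in real_paths}

def redirect_target(request_path):
    """Return the canonical-case path if the request differs only by case."""
    canonical = canonical_by_lower.get(request_path.lower())
    if canonical and canonical != request_path:
        return canonical  # issue a 301 to this path
    return None  # already canonical, or an unknown path

print(redirect_target("/examplepage"))
```

The same idea can be expressed as a case-insensitive rewrite rule at the server level; the key point is that the redirect target is the URL that actually exists.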
Intermediate & Advanced SEO | yogitrout1
-
Remove content that is indexed?
Hi guys, I want to delete an entire folder whose content is indexed. How can I explain to Google that the content no longer exists?
Intermediate & Advanced SEO | Valarlf
-
Removing Dynamic "noindex" URLs from the Index
Six months ago my client's site was overhauled, and the user-generated searches had an index tag on them. I switched that to noindex, but not fast enough to avoid having hundreds of pages indexed in Google. It has been months since switching to the noindex tag, and the pages are still indexed. What would you recommend? Google crawls my site daily, but never the pages that I want removed from the index. I am trying to avoid submitting hundreds of these dynamic URLs to the removal tool in Webmaster Tools. Suggestions?
Intermediate & Advanced SEO | BeTheBoss
-
Duplicate content on sub-domains?
I have 2 subdomains intended for 2 different countries (Colombia and Venezuela): ve.domain.com and co.domain.com. The site is an e-commerce store with over a million products available, so both subdomains have the same pages with the same content; the only differences are the prices and payment options. Does Google treat that as duplicate content? Thanks.
Intermediate & Advanced SEO | daniel.alvarez
-
How to fix issues regarding URL parameters?
Today, I was reading Google's help article on URL parameters: http://www.google.com/support/webmasters/bin/answer.py?answer=1235687
I learned that Google gives weight to URLs with parameters that change or determine the content of a page. There are many pages on my website with similar values for name, price, and number of products, but I have blocked them all in robots.txt with the following syntax.
URLs:
http://www.vistastores.com/table-lamps?dir=asc&order=name
http://www.vistastores.com/table-lamps?dir=asc&order=price
http://www.vistastores.com/table-lamps?limit=100
Syntax in robots.txt:
Disallow: /?dir=
Disallow: /?p=
Disallow: /*?limit=
Now I am confused. Which is the best solution to get the maximum SEO benefit?
Intermediate & Advanced SEO | CommercePundit
-
BEING PROACTIVE ABOUT CONTENT DUPLICATION...
So we all know that duplicate content is bad for SEO. I was just thinking: whenever I post new content to a blog, website page, etc., there should be something I can do to tell Google (in fact, all search engines) that I just created and posted this content to the web and that I am the original source, so that if anyone else copies it, they get penalised and not me. Would appreciate your answers. 🙂 Regards,
Intermediate & Advanced SEO | TopGearMedia