Category Pages For Distributing Authority But Not Creating Duplicate Content
-
I read this interesting Moz guide: http://moz.com/learn/seo/robotstxt, which I think answered my question, but I just want to make sure.
I take it to mean that if I have category pages with nothing but duplicate content (lists of other pages: each one's h1 title, on-page description, and a link to it), and I still want the category pages to distribute their link authority to the individual pages, then I should leave the category pages in the sitemap and meta noindex them, rather than robots.txt them. Is that correct?
Again, I don't want the category pages to be indexed or to create a duplicate content issue, but I do want them to be crawled enough to distribute their link authority to the individual pages.
Given the scope of the site (thousands of pages and hundreds of categories), I just want to make sure I have that right. Up until my recent efforts on this, some of the category pages had been robots.txt'd out but were still in the sitemap, while others (with a different URL structure) were in the sitemap and not robots.txt'd out.
Thanks! Best.. Mike
-
Thanks, Jane! I really appreciate it.
If the now noindexed category pages have already been indexed, do you think I should request removal from the index as well?
Best... Mike
-
"I still want the category pages to distribute their link authority to the individual pages, then I should leave the category pages in the site map and meta noindex them, rather than robots.txt them. Is that correct?"
This will achieve the goal, yes. You would ideally include noindex, follow (as opposed to nofollow) in the meta tag of the page you want to exclude. This means that Google crawls the page in full and allows PageRank to flow from that page to the pages it links to, but doesn't include any of the page's content or its URL in the index.
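As a sketch, the tag described here would sit in the page's head section like this (the markup is the standard robots meta tag, not anything platform-specific):

```html
<head>
  <!-- Keep this page out of the index, but let crawlers follow its links -->
  <meta name="robots" content="noindex, follow">
</head>
```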
If you exclude the page via robots.txt, Google never crawls the page at all. You sometimes see URLs whose pages have been excluded via robots.txt showing up in Google's index, because robots.txt doesn't say "don't index this URL"; it simply says "don't crawl it." That's also why excluding a page in robots.txt and putting a noindex meta tag on the page would be redundant - Google would never see the noindex tag because it would never crawl the page.
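To make the distinction concrete, here is a small stdlib-only Python sketch (the function and class names are mine for illustration, not from any Moz tool) that checks whether a page's meta robots tag requests noindex without nofollow — the combination that keeps the page out of the index while still letting PageRank flow through its links:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the directives from any <meta name="robots"> tag in a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            # Split "noindex, follow" into individual normalized directives
            self.directives += [d.strip().lower()
                                for d in attrs.get("content", "").split(",")]

def is_noindex_follow(html_source):
    """True if the page asks to be excluded from the index
    while still allowing its links to be followed."""
    parser = RobotsMetaParser()
    parser.feed(html_source)
    return "noindex" in parser.directives and "nofollow" not in parser.directives

page = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
print(is_noindex_follow(page))  # True
```

Note this check only works on pages a crawler can actually fetch; as explained above, a page blocked in robots.txt is never crawled, so a noindex tag on it would never be seen.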