Should I noindex my categories?
-
Hello! I have created a directory website with a pretty active blog. I probably messed this up, but I pretty much have categories (for my blog) and a custom taxonomy (for different categories of services) that are very similar. For example, I have the blog category "anxiety therapists" and the custom taxonomy term "anxiety".
1. Is this a problem for Google? Can it tell the difference between archive pages in these two taxonomies even though the names are similar?
2. Should I noindex my blog categories, since the main purpose of my site is to help people find therapists, i.e. my custom taxonomy?
-
Kind of exciting, though. Every time Google picks up on a couple of URLs, my rankings shoot up. It's fun to watch ^_^
-
That was part of my apprehension about deindexing my blog categories: they are ranking right now... but I also pulled a dumb move and set all of my listing categories to noindex in Yoast a couple of months ago. I fixed this a month ago but am still waiting on Google to pick up on it. That's part of why I'm not sure about all of this; I don't know whether things will change once Google starts noticing my listing categories.
-
"insead of having "/anxiety" and also "/anxiety-counseling" on the same level, why not have "/conditions/anxiety" and also "/practitioners/anxiety" as well? That way the URLs are different but there's also a hierarchical structure which helps Google to work out which is which"
I've currently got it set up so that blog posts are
/category/anxiety
and listings are under
/listing-category/anxiety
Would you say this is sufficient to indicate to Google that these two are different?
-
If you get organic traffic on your categories, you can keep them indexed. If they don't get any traffic in the SERPs, don't index them.
-
It's unlikely, if two pages are both very useful for a query, that Google would de-list one purely because it's from the same domain. If neither page is very high value in terms of content or popularity, what you are suggesting can happen. But instead of taking the 'easy' way out and de-indexing one, your end goal should be to make every page as useful as possible!
You will rarely ever benefit in the SERPs by doing a 'quick, easy thing' which adds no value to your site, your pages, or the wider web. Always ask how you could be informing, educating, or entertaining the web in a fresh new way which hasn't previously been done. If you're doing what has been done before, you need to do it at least 3-4x better to steal that audience and exceed the historic popularity of other information sources.
If your categories really are all on the same level, you might want to address that by adding architectural (URL) layers to distinguish them. Whenever you say to yourself "I can't do better", that is a big problem, as not all of your competitors will share that same mindset. Do you want to be the one who gets ahead? Then you need to push on!
Instead of having "/anxiety" and also "/anxiety-counseling" on the same level, why not have "/conditions/anxiety" and "/practitioners/anxiety"? That way the URLs are different, but there's also a hierarchical structure which helps Google work out which is which.
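In WordPress terms, that's mostly a question of rewrite slugs. Here's a rough sketch of how the listings side could be set up, assuming your listings are a custom post type; the 'listing'/'listing_category' names and the 'practitioners' slug are placeholders for whatever your setup actually uses:

```php
<?php
// Hypothetical sketch: serve listing archives under /practitioners/...
// The taxonomy and post type names below are placeholders.
add_action( 'init', function () {
    register_taxonomy( 'listing_category', 'listing', array(
        'label'        => 'Listing Categories',
        'public'       => true,
        'hierarchical' => true,
        'rewrite'      => array(
            'slug'       => 'practitioners', // archives become /practitioners/anxiety
            'with_front' => false,
        ),
    ) );
} );
```

The blog side needs no code at all: the category base (e.g. "conditions") can be changed under Settings > Permalinks. Flush permalinks after changing either, and 301-redirect the old URLs so you don't throw away the rankings you already have.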
I think you're right that your blogs may contain content that is more relevant to the queries you have specified. That being said, de-indexing them doesn't magically make your commercial pages more relevant. It's not necessarily going to make those pages rank better, or at all. As such, maybe doing heavier CRO on the non-commercial pages would be the most advisable solution!
If you ever find yourself thinking "aha, I can do this quick, clever thing to make Google do what I want instead of putting their users first", it's almost certainly the wrong tactic.
-
Awesome! Thank you for your response ^_^ I'm not so concerned about getting one to rank over the other as I am concerned that having one will cause the other not to rank at all, or to be significantly dampened.
2 problems lol
-
I really couldn't come up with a good category structure, so I have 30-40 categories all on the same level. It's a therapist directory, so all of the categories in question are pretty much diagnoses/therapeutic issues. I don't think I could create any better hierarchy... is that really bad?
-
I did something weird :-p. My blog categories are pretty much duplicates of my custom taxonomy, but with "therapy" or "counseling" tacked on the end... I think it would be better to have my custom taxonomy set up this way, because it's about therapists and counselors, whereas my blog is about the subject in question... but my theme is set up in a way that would have made that look bad, resulting in long lists like this:
anxiety counseling
depression counseling
couples counseling
etc.
Do you think this is a problem? Should I go through all of the coding work to change it, or would something like this make little difference to Google? I.e. if someone searches for "anxiety therapy", would the blog archive "anxiety therapy" be more likely to come up than the archive of actual therapists who work with anxiety, called "anxiety", because the names suggest the blogs are more relevant to the search query?
-
This is an interesting question, and I can see why, with many modern agencies focusing on 'keyword cannibalisation', you would consider this action. What you have to realise is that Google still largely sees the web as a mass of interconnected pages. If your blog categories supply decent enough content to rank for those related terms, there's no guarantee that, if you turn them off, Google will make the same evaluation of your business-aimed (service-level) categories instead.
That being the case, I'd actually let time and data lead the way. In Google Analytics you will probably find that some service-level categories gain more traffic, whilst for other categories their contextual blog iterations bring in more.
You might consider learning more about CRO (Conversion Rate Optimisation). In my opinion, there's rarely a time when turning traffic off is beneficial. But could those blog category URLs be re-designed to point users more easily (and more often) to their commercial counterparts? Probably.
I do tend to no-index 'tag' URLs, as they are messy and non-hierarchical and can fudge up your equity flow from A to B. But actual categories with a hierarchical structure? Those are pages which you do want to rank.
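On WordPress, Yoast has a per-taxonomy toggle for exactly this (under its Search Appearance settings in most versions). If you'd rather handle it in code, here's a minimal sketch using the wp_robots filter (WordPress 5.7+); treat it as illustrative rather than a drop-in:

```php
<?php
// Minimal sketch (WordPress 5.7+): noindex tag archives while leaving
// category and custom taxonomy archives indexable.
add_filter( 'wp_robots', function ( $robots ) {
    if ( is_tag() ) {
        $robots['noindex'] = true;
        $robots['follow']  = true; // still let crawlers follow the links on the page
    }
    return $robots;
} );
```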
You might also consider whether there's some clever way to have just one category which lists both posts and commercial offerings on a given thematic basis. Really, architectural unification should be your end goal!
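As a very rough sketch of what that unification could look like in WordPress, assuming the taxonomy is registered for both post types (names again hypothetical), you could broaden the main query on the taxonomy archive:

```php
<?php
// Hypothetical sketch: make one taxonomy archive list blog posts and
// listings together. Assumes 'listing_category' is registered for both
// the 'post' and 'listing' post types.
add_action( 'pre_get_posts', function ( $query ) {
    if ( ! is_admin() && $query->is_main_query() && $query->is_tax( 'listing_category' ) ) {
        $query->set( 'post_type', array( 'post', 'listing' ) );
    }
} );
```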
Remember: there's absolutely no guarantee that de-listing one category type will cause the other to rank. They're very different pages with contextually different content. Keep an eye on both, and strategise to bring them together eventually. That's what I would do!