Should I set blog category/tag pages as "noindex"? If so, how do I prevent "meta noindex" Moz crawl errors for those pages?
-
From what I can tell, SEO experts recommend setting blog category and tag pages (e.g. "http://site.com/blog/tag/some-product") to "noindex, follow" in order to keep the quality of indexable pages high. However, I just received a slew of critical crawl warnings from Moz for having these pages set to "noindex." Should the pages be indexed? If not, why am I receiving critical crawl warnings from Moz, and how do I prevent them?
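For context, the "noindex, follow" setting discussed here is typically implemented as a robots meta tag in the page head (or as an X-Robots-Tag HTTP header); a minimal sketch:

```html
<!-- In the <head> of each tag/category archive page -->
<!-- "noindex" keeps the page out of the index; "follow" lets crawlers pass through its links -->
<meta name="robots" content="noindex, follow">
```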
-
In the situation outlined by the OP, these pages are noindexed, so there's no value in cluttering up crawl reports with them. Block rogerbot from non-critical parts of your site; if you do want to be alerted to issues on those pages, then don't.
-
Thanks. I'm not concerned about the crawl depth of the search engine bots; nothing in your fix would affect that. I'm curious about the decrease in the Moz crawl depth of the site, as we use that to spot issues with the site.
One of the clients I implemented the fix on went from 4.6K crawled pages to 3.4K and the fix would have removed an expected 1.2K pages.
The other client went from 5K to 3.7K and the fix would have removed an expected 1.3K pages.
TL;DR - Good news, everybody: the robots.txt fix didn't reduce the crawl depth of the Moz crawler!
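A quick sanity check of the numbers quoted above (before/after crawl counts and the expected page removals) confirms the drop matches the fix exactly:

```python
# Sanity-check: did each crawl drop by the number of pages the
# robots.txt fix was expected to exclude? (Counts quoted from the post above.)
clients = [
    {"before": 4600, "after": 3400, "expected_removed": 1200},
    {"before": 5000, "after": 3700, "expected_removed": 1300},
]

for c in clients:
    actual_removed = c["before"] - c["after"]
    # The fix accounts for the full drop when actual equals expected
    print(actual_removed, actual_removed == c["expected_removed"])
```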
-
I agree. Unfortunately, Moz doesn't have an internal disallow feature that lets you tell them where rogerbot can and can't go. I haven't come across any issues with this approach; crawl depth by search engine bots will not be affected, since the user-agent is specified.
-
Thanks for the solution! We have been coming across a similar issue with some of our sites, and although I'm not a big fan of this type of workaround, I don't see any other options and we want to focus on the real issues. You don't want to ignore the rule entirely, in case other pages that should be indexed are marked noindex by mistake.
Logan, are you still getting the depth of crawls after making this type of fix? Have any other issues arisen from this approach?
Let us know
-
Hi Nichole,
You're correct in noindexing these pages; they serve little to no value from an SEO perspective. Moz is always going to alert you to noindex tags when it finds them, since that tag is a critical issue if it shows up in unexpected places. If you want to remove these issues from your crawl report, add the following directive to your robots.txt file. This will prevent Moz from crawling these URLs and therefore reporting on them:
User-agent: rogerbot
Disallow: /tag/
Disallow: /category/
Edit - do not prevent all user-agents from crawling these URLs, as that would keep search engines from seeing your noindex tag; they can't obey what they aren't permitted to see. If you want, once all tag & category pages have been removed from the index, you can update your robots.txt to remove the rogerbot directive and add the disallows for tag & category under the * user-agent.
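For reference, once the tag and category pages have dropped out of the index, the final robots.txt described above might look like this (a sketch; the paths assume the /tag/ and /category/ structure from the question):

```
# After de-indexing is complete: block all crawlers, not just rogerbot
User-agent: *
Disallow: /tag/
Disallow: /category/
```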
Related Questions
-
Abnormal crawl issues appearing in my Moz results
I have been asked to look at a site for a friend and was more than surprised to see 16.9k crawl issues appear in the dashboard... of these, 6,238 are duplicate page content and 5,878 are duplicate page titles. What on earth is going on? I have spoken to the web developer, as it appears there is a dev site somewhere, and this is his response: [Can I stress that Google determines which site was in the index first and then removes other sites it sees as having duplicate content. Our dev sites appearing in the search index would not affect your ranking due to duplicate content, as Google would see your site as the first site with the content.] As I cannot make contact with him, I am scratching my head. Surely a dev site should be noindexed? It sounds as though he is saying it's OK because Google will treat the main site as the first site with the content... Very confused! Help needed, Moz community. Many thanks, Sarah
-
Error in Moz duplicate content reports
Hi - I've run the Moz campaign on a client's site. Moz is saying that there are duplicate content errors, and when I look at the errors it shows that they are all to do with the non-www URLs having been duplicated as the www form of the URLs. However, this is not the case: all the non-www URLs are 301 redirected to the www URLs. Is this an error in the Moz tool? Has anybody experienced something similar?
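For anyone checking the redirect side of an issue like this, a common Apache .htaccess pattern for the non-www to www 301 (a generic sketch using the hypothetical site.com, not the poster's actual config) looks like:

```apache
# Redirect all non-www requests to the www host with a permanent 301
RewriteEngine On
RewriteCond %{HTTP_HOST} ^site\.com$ [NC]
RewriteRule ^(.*)$ http://www.site.com/$1 [R=301,L]
```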
-
Noindex/nofollow on blog comments; is it good or bad ?
Hi, I changed the design of one of my WordPress websites at the beginning of the month. I also added a "Facebook SEO comments" plugin to rewrite Facebook comments as normal comments. As most of the website comments are Facebook comments, I went from 250 noindex/nofollow comments to 950; the URLs are ?replytocom=4822 etc. My Moz campaign noticed it, and I'm asking myself: is it good to have comments in noindex/nofollow? Should I do something about this? Erwan.
-
Drop in Number of Crawled pages by SEOMOZ?
I noticed that the number of crawled pages on my website has been only 2 pages over the past week. Before that, the number of crawled pages was over 1,000. My site has numerous pages, as it is a travel website that pulls search results for flights, cars, hotels, cruises, and vacation packages, so there is a huge database there. Can someone help? Thanks!
-
Amount of Pages Crawled Dropped Significantly
I am just wondering if something changed with the SEOmoz crawler. I was always getting 10,000 or nearly 10,000 pages crawled. After the last two crawls I am ending up around 2,500 pages. Has anything changed that I would need to look at to see if I am blocking the crawler, or is it something else?
-
Wild fluctuation in number of pages crawled
I am seeing huge fluctuations in the number of pages discovered by the crawl each week. Some weeks the crawl discovers more than 10,000 pages, and other weeks I am seeing 400-500. So, this week for example, I was hoping to see some changes reflected for warnings from last week's report (which discovered over 10,000 pages). However, the entire crawl this week was 448 pages. The number of pages discovered each week seems to go back and forth between these two extremes. The more accurate count would be nearer the 10,000 mark than the 400 range. Thanks. Mark
-
Different Moz Scores Between Page and Subdomain
The Moz scores for a specific page, www.website.com, are different than those for the subdomain, www.website.com/.
www.website.com: Page MozRank 4.58, Page MozTrust 4.9
www.website.com/: Subdomain MozRank 5.21, Subdomain MozTrust 5.18
Does that have any significance? Should I be routing pages to the "/" version? Thanks, Daniel
-
Is the "Too Many Links" metric a blunt instrument?
The SEOmoz crawl diagnostics suggest that we have too many on-page links on several pages, including our homepage. How does your system determine when a page has too many links? It looks like a page with more than 100 links is flagged as too many. Should the system take into account page authority, domain authority, depth of page, or other metrics? In addition, nofollow links are being included. As these are dropped from Google's link graph, does it matter if we have too many? For example, we could have 200 links, 120 of which are nofollow, and your tools would tell us we have too many links. Feedback appreciated. Donal