SEOmoz Bar: Nofollow and Robots.txt
-
Should the MozBar pick up "nofollow" links that are handled in robots.txt?
The robots.txt blocks categories, but they still show as followed (green) links when using the MozBar.
Thanks!
Holly
ETA: I'm assuming that "Disallow: /category/" in robots.txt is comparable to putting the nofollow tag on category links?
-
Thank you, Cyrus, for that great article link. As the article states near the end, it touches on a common problem for those of us who assume all the info at SEOmoz is accurate even though it may not be current (not only SEOmoz, to be fair). I've found several instances where even the authorities change their mind, or Google changes it for them.
Anyway, it appears that using canonical or meta tags would be the better solution. Unfortunately, neither is possible in Squarespace. I had just about decided to change the robots.txt, get rid of the Disallow: /category/ line, and call it a day. But then I found an example where Noindex was used in the robots.txt file of a Squarespace website (one specializing in SEM, among other things). Probably the "longest" robots.txt I've ever seen!
http://www.hunchfree.com/robots.txt
Would it be a good idea to use noindex, follow in the robots.txt for /category/ (if that's even possible), or should I just stick with my "call it a day" solution, at least where robots.txt is concerned?
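For reference, here's my guess at what that kind of entry would look like for my /category/ path (I'm not sure the Noindex line is even an officially supported robots.txt directive, and as far as I can tell there's no "follow" equivalent in robots.txt at all):

    User-agent: *
    Disallow: /category/
    Noindex: /category/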
BTW, I posted a similar question about the reasoning behind the robots.txt on Squarespace websites over at their developers' forum, and got nothing but crickets. Unless it's about design, things pretty much drop like a rock. Oh well.
-
As Phil pointed out, blocking a URL with robots.txt may keep search engines from crawling your pages, but that doesn't mean they won't index those pages. The meta robots NOINDEX, FOLLOW tag is a much better choice.
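In case it helps, that tag goes in the <head> of each page you want kept out of the index, and looks like this:

    <meta name="robots" content="noindex, follow">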
Highly recommend the following article that explains this in more detail:
http://www.seomoz.org/blog/serious-robotstxt-misuse-high-impact-solutions
Unfortunately, Squarespace isn't all that flexible when it comes to meta tags. For the most part, Google is getting better at figuring this kind of duplicate content out, but it's best to address it when you can.
-
Thank you so much for the detailed reply. It's REALLY appreciated. The blog you are referring to is the Squarespace company's blog; the Disallow on categories IS, however, present on any site that uses their service. I've done a similar search on my personal blog on Squarespace, and a couple of categories still show up in the SERPs anyway. You can edit the robots file if you want, but you have to do a redirect since you don't have root access.
Unfortunately, we can't (at least I don't think we can) include meta tags for noindex on a page-by-page basis. You can use it in robots.txt.
It seems there would be much more of a duplicate content issue with tags than with categories, since tags are more granular.
The point of all this is that I'm creating new websites for some of our homeschool students and want to get the site architecture right from the start, including how we use tags and categories, with a balanced focus on usability as well as optimizing for search. These kids are super interested in the reasoning behind things, and their questions are tougher than any client's! Ha!
Again, Thanks so much and take care,
Holly
-
Thanks for providing some more detail, Holly. I definitely think it's fine to leave it here, and I'm happy to help.
Some people like to prevent search engines from crawling category pages out of a fear of duplicate content. For example, say you have a post that's at this URL:
site.com/blog/chocolate-milk-is-great.html
and it's also the only post in the category "milk," which has its own category URL. In that case, search engines see the exact same content (your blog post) on two different URLs. Since duplicate content is a big no-no, many people choose to prevent the engines from crawling category pages. Although, in my experience, it's really up to you. Do you feel like your category pages will provide value to users? Would you like them to show up in search results? If so, then make sure you let Google crawl them.
If you DON'T want category pages to be indexed by Google, then I think there's a better choice than using robots.txt. Your best bet is applying the noindex, follow tag to these pages. This tag tells the engines NOT to index this page, but to follow all of the links on it. This is better than robots.txt because robots.txt won't always prevent your site from showing up in search results (that's another long story), but the noindex tag will.
If I'm not making sense at all then please just let me know :).
Lastly, from what I can see on your site and blog, it doesn't look like the category pages for your blog are actually blocked in your robots.txt file. It's worth double-checking.
To check this myself, I just did a Google search for this URL:
http://blog.squarespace.com/blog/?category=Roadmap
And it showed up in Google right away. Looks like something isn't going according to plan. Don't worry though, that happens all of the time and it should be an easy fix.
-
I know I may wake up one morning and this will all click, but for now perhaps an example will help me get past this initial hurdle.
Squarespace disallows categories in the robots.txt, but using the MozBar I see the category links are green (followed).
So if I understand (partly, anyway), the Disallow in robots.txt keeps the bots from crawling those pages when they come knocking at my site. But then are the category links in a blog post still being crawled? Or what's the point?
I'm just trying to understand the reasoning behind disallowing categories and how that should impact the tagging and categorizing of blog posts.
Perhaps I should have started a new question? Or is it okay to leave it here?
-
The nofollow attribute and robots.txt file serve different purposes.
Nofollow Attribute
This attribute is used to tell search engines, "Don't follow this link," or even "Don't follow any links on this page." It doesn't prevent pages from being indexed; it just prevents the search engines from following that link from that particular page.
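For example, a link-level nofollow and a page-level nofollow look like this (example.com is just a placeholder):

    <a href="http://example.com/" rel="nofollow">Example link</a>
    <meta name="robots" content="nofollow">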
Robots.txt
This file contains a list of pages and directories that search engines should not access (crawl).
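A minimal robots.txt that blocks a directory looks like this (using /category/ purely as an example path):

    User-agent: *
    Disallow: /category/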
To read more about robots.txt check out this page: http://googleblog.blogspot.com/2007/01/controlling-how-search-engines-access.html
For more on Nofollow, check out this page: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=96569
Hope this helps!