Application & understanding of robots.txt
-
Hello Moz World!
I have been reading up on robots.txt files, and I understand the basics. I am looking for a deeper understanding of when to deploy particular tags, and when a page should be disallowed because it will affect SEO. I have been working with a software company that has a News & Events page which I don't think should be indexed. It changes every week, and is only relevant to potential customers who want to book a demo or attend an event, not so much search engines. My initial thinking was that I should use a noindex/follow tag on that page. That way, the page would not be indexed, but all of its links would still be crawled.
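For reference, this is the tag I had in mind; it would go in the `<head>` of the News & Events page:

```html
<!-- Robots meta tag: "noindex" asks engines to keep the page
     out of the index, while "follow" still lets them crawl the
     links on it -->
<meta name="robots" content="noindex, follow">
```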
I decided to look at some of our competitors' robots.txt files: Smartbear (https://smartbear.com/robots.txt), b2wsoftware (http://www.b2wsoftware.com/robots.txt), and labtech (http://www.labtechsoftware.com/robots.txt).
I am still confused about what type of tags I should use, and how to gauge which set of tags is best for certain pages. I figured a static page is pretty much always good to index and follow, as long as it's public, and that I should always include a sitemap file. But what about a dynamic page? What about pages that are out of date? Will this help with soft 404s?
This is a long one, but I appreciate all of the expert insight. Thanks ahead of time for all of the awesome responses.
Best Regards,
Will H.
-
Yup. Also, don't forget that robots.txt is just a "recommendation" for robots; they are not required to obey it.
Basically, Google does whatever it wants to.
Also, if you block a folder so its inner content won't be "accessed," keep in mind that if any link points to a page inside it, even a link coming from outside your domain, that page can still be indexed. Its content won't be shown in search results, but the URL will show up with a notice stating that the content is blocked by the site's robots.txt. Best of luck!
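A quick sketch of that distinction (the folder name here is just a hypothetical example):

```text
# robots.txt blocks crawling only. A URL under /news-events/ can
# still end up indexed if anything links to it; it will just show
# the "blocked by robots.txt" notice instead of a snippet.
User-agent: *
Disallow: /news-events/

# To actually keep a page out of the index, put a noindex meta tag
# on the page itself and do NOT block it here, so crawlers can
# fetch the page and see the tag.
```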
-
Great advice, Yossi & Chris. Thanks for taking the time to reply. I will have to dig into the Google guidelines for additional information, but both of your points are valid. I think I was looking at robots.txt the wrong way. Thanks again, guys!
-
I completely agree with Yossi here; no need to go blocking that page at all.
I can't really add any further value to the points he has covered, but one other part of your question suggests that perhaps you're looking at this the wrong way (and it's very common, don't worry!). Rather than having your site stay as-is and just obscuring the bad parts of it from search engines, the thought process should really be about creating a great website instead.
If you're ever considering blocking a page from search engines, the first step should always be asking: "Why am I blocking this page? Could I just fix the issue instead?"
For example, you asked if this might help with soft 404s. Rather than trying to find a way to hide these soft 404s, spend that time fixing them instead!
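To sketch what "fixing" can look like in practice: instead of serving a 200 "sorry, not found" page, return a real 404/410, or 301 to a genuine replacement. A minimal example, assuming an nginx server and hypothetical URLs (adjust to your own setup):

```nginx
# Expired event pages with no replacement: return a true 410 Gone
# instead of a 200 error page (which Google reports as a soft 404)
location /old-events/ {
    return 410;
}

# If a page has a genuine successor, redirect to it instead
location = /events-2014.html {
    return 301 /news-events/;
}
```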
-
Hi Will
There are a couple of concerns in your question that I don't quite understand.
Why do you want to block the News & Events page? If it has unique content and, on top of that, is updated regularly, you have no reason to block access to it. If it is "relevant to potential customers who want to book a demo," that's great. I would definitely keep it indexed and followed. Google explicitly states that you should not block access to a page if you simply want to de-index/remove it. If the page should not be indexed publicly, you should remove it or password-protect it (a Google suggestion).
About tags, I assume you are talking about meta tags, correct?
There is no need to use any kind of meta tag to signal search engines that they should index or follow a page; you only use one when you want to restrict them from taking certain actions.
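As a side note, the same restrictive directives can also be sent as an HTTP header (X-Robots-Tag), which is useful for non-HTML files like PDFs. A minimal sketch for Apache, assuming mod_headers is enabled:

```apacheconf
# Keep PDF downloads out of the index while still letting their
# links be followed, by sending the directive as a response header
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, follow"
</FilesMatch>
```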
Also, there is no difference between a static and a dynamic page when it comes to tag usage; there are no rules for that. A page can perfectly well stay static for years and still get indexed and rank very well (though, well, we all know that updating the site is a ranking signal).
If you believe a certain page should be tagged "noindex," it is not because it hasn't been updated within the last month or year. Just as an example: contact us, about us, and terms of use pages. These are super-static pages that in many cases probably won't change for years. Best,
Yossi