Robots.txt | any SEO advantage to having one vs not having one?
-
Neither of my sites has a robots.txt file. I guess I have never been bothered by any particular bot enough to exclude it.
Is there any SEO advantage to having one anyway?
-
It's good practice, especially if you're operating a CMS that can generate crawlable URLs that cause duplicate-content problems, "junk" pages, and so on. For example: http://www.asos.com/robots.txt
Google dislikes having internal search results pages indexed, so you can block those off; see http://moz.com/robots.txt for an example.
You can also disallow the archive.org bot if you don't want old versions of your site appearing in the Wayback Machine, and, as others have said, you can point crawlers to your XML sitemap.
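A sketch of how those pieces might look in one file (the /search/ path is a hypothetical internal-search pattern and the sitemap URL is a placeholder; ia_archiver is the user-agent archive.org has honored for exclusions):

# Block internal search results for all crawlers
User-agent: *
Disallow: /search/

# Keep the site out of the Wayback Machine
User-agent: ia_archiver
Disallow: /

Sitemap: http://www.example.com/sitemap.xml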
It's not a bad resource to have at your disposal for site hygiene / maintenance reasons, but it's not an absolute necessity either.
-
There are actually a couple of good reasons, but in short, it's "best practice," so it won't hurt to add one. It won't take more than a couple of minutes.
-
Just good practice. One SEO advantage would be to include a reference to your sitemap within the robots.txt file.
Aside from that, if you want all of your pages crawled and don't have a sitemap (although you should have one), there's no need for a robots.txt file.
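For that case, a robots.txt that blocks nothing and simply points to the sitemap is only a few lines (the sitemap URL is a placeholder):

# An empty Disallow blocks nothing; everything stays crawlable
User-agent: *
Disallow:

Sitemap: https://www.example.com/sitemap.xml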
Related Questions
-
I have two robots.txt files for the www and non-www versions. Will that be a problem?
There are two robots.txt files: one for the www version and another for the non-www version, though I have moved to the non-www version.
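(For context, a common way to avoid the problem entirely is to 301-redirect the www host to the non-www host at the server level, so only one robots.txt is ever served; a hedged sketch assuming Apache with mod_rewrite and example.com as a placeholder domain:)

RewriteEngine On
# Send all www traffic to the non-www host, robots.txt included
RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
RewriteRule ^(.*)$ https://example.com/$1 [R=301,L]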
Technical SEO | ramb
-
Craft CMS SEO Resources
I'm just starting out in freelance SEO, and I've taken on a client who is using Craft CMS (version 2.0ish) for their site. I am not even close to being competent enough to manually code via Twig, but I had the main developer install the SEOmatic plugin for me. My question from here is: are there any resources or tips I should be aware of starting out? I began by updating meta titles/descriptions via "New Template Meta(s)," but I'm a bit concerned I'm not doing the "template path" thing right; I haven't seen any visible changes in the browser, and the SERP preview I'm getting is giving me a broken link. I'm running a fresh Moz crawl right now to see whether the changes took effect. So:
1. Am I on the right track?
2. How long does it typically take for changes to start to show?
3. Is there anything I should be aware of?
Any follow-up questions, just let me know; I'll be following this thread!
Technical SEO | dig_ad_austin
-
Link to AMP vs. AMP Google Cache vs. standard page?
Hi guys, during link building, which version should I prefer as the destination:
- the normal version (PHP page),
- the AMP page of the website, or
- the AMP page in the Google cache?
The main doubt is between the site's AMP page and the standard version. Does the canonical meta tag make the two equivalent, or is there a better solution? Thank you so much!
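(For reference, a standard page and its AMP counterpart are tied together with a rel="amphtml" / rel="canonical" tag pair, which is why search engines generally consolidate signals onto the canonical URL; a minimal sketch with placeholder URLs:)

<!-- on the standard page -->
<link rel="amphtml" href="https://example.com/page/amp/">

<!-- on the AMP page -->
<link rel="canonical" href="https://example.com/page/">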
Technical SEO | Dante_Alighieri
-
Robots.txt blocking Addon Domains
I have this site as my primary domain: http://www.libertyresourcedirectory.com/ I don't want to give spiders access to the site at all, so I tried a simple Disallow: / in the robots.txt. As a test, I tried to crawl it with Screaming Frog afterwards and it didn't do anything. (Excellent.)

However, there's a problem. In GWT, I got an alert that Google couldn't crawl ANY of my sites because of robots.txt issues. Changing the robots.txt on my primary domain changed it for ALL my addon domains (e.g. http://ethanglover.biz/). From a directory point of view, this makes sense; from a spider point of view, it doesn't.

As a solution, I changed the robots.txt file back and added a robots meta tag (noindex, nofollow) to the primary domain. But this doesn't seem to be having any effect. As I understand it, the robots.txt takes priority.

How can I separate all this out to allow domains to have different rules? I've tried uploading a separate robots.txt to the addon domain folders, but it's completely ignored. Even going to ethanglover.biz/robots.txt gave me the primary domain's version of the file. (SERIOUSLY! I've tested this 100 times in many ways.) Has anyone experienced this? Am I in the twilight zone? Any known fixes? Thanks. Proof I'm not crazy in attached video: robotstxt_addon_domain.mp4
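(For reference, a common workaround when addon domains share one hosting account is to serve a different robots.txt per hostname with a rewrite rule; a sketch assuming Apache with mod_rewrite, where robots-ethanglover.txt is a hypothetical per-domain file uploaded next to the main one:)

# .htaccess in the primary document root
RewriteEngine On
# When the request arrives on the addon domain, serve its own robots file
RewriteCond %{HTTP_HOST} ^(www\.)?ethanglover\.biz$ [NC]
RewriteRule ^robots\.txt$ /robots-ethanglover.txt [L]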
Technical SEO | eglove
-
Easy question: noindex meta tag vs. robots.txt
This seems like a dumb question, but I'm not sure what the answer is. I have an ecommerce client who has a couple of subdirectories, "gallery" and "blog". Neither directory gets a lot of traffic or really turns into many conversions, so I want to remove the pages so they don't drain my PageRank from more important pages. Does this sound like a good idea? I was thinking of either disallowing the folders via the robots.txt file, adding a "noindex" tag, 301-redirecting, or deleting them. Can you help me determine which is best?

**DEINDEX:** As I understand it, the noindex meta tag allows robots to still crawl the pages, but they won't be indexed. The supposed good news is that it still allows link juice to pass through. This seems like a bad thing to me because I don't want to waste my link juice passing to these pages; the idea is to keep my PageRank from being diluted on these pages. A similar question: if PageRank is finite, does Google still treat these pages as part of the site even if it's not indexing them? If I do deindex these pages, there are quite a few internal links to them. Even though these pages are deindexed, they still exist, so it's not as if the site would return a 404, right?

**ROBOTS.TXT:** As I understand it, this will keep robots from crawling the pages, so they won't be indexed and link juice won't pass. I don't want to waste the PageRank from links that point to these pages, so is this a bad option?

**301 REDIRECT:** What if I just 301-redirect all these pages back to the homepage? Is this an easy answer? Part of the problem with this solution is that I'm not sure if it's permanent, but even more important, currently 80% of the site is made up of blog and gallery pages, and I think it would be strange to have the vast majority of the site 301-redirecting to the home page. What do you think?

**DELETE PAGES:** Maybe I could just delete all the pages. This would keep the pages from taking link juice and would deindex them, but there are quite a few internal links to these pages. How would you find all the internal links that point to these pages? There are hundreds of them.
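(For reference, the two mechanisms being weighed look like this; the /gallery/ and /blog/ paths match the directories described above:)

# robots.txt: blocks crawling, but pages can still end up indexed if other sites link to them
User-agent: *
Disallow: /gallery/
Disallow: /blog/

<!-- noindex meta tag in each page's <head>: allows crawling, blocks indexing, lets links pass -->
<meta name="robots" content="noindex, follow">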
Technical SEO | Santaur
-
Name Servers & SEO
We have decided to create a few blogs and will eventually be linking to some of our clients. I have domain privacy and different class C addresses for each of my domains, but the name servers are all the same. For example: if we create an article for one client on all five blogs, will the name servers be a problem?
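(For reference, you can verify which name servers each domain uses with a quick DNS lookup; a sketch using dig, with example.com as a placeholder:)

# Lists the authoritative name servers for the domain
dig +short NS example.com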
Technical SEO | waqid
-
404 vs. 200?
Is it better to have an error page return a 404 or a 200? If I change it to 200, will I still be able to see reports of 404s and/or broken links? Is there a valid SEO reason Google would have for not wanting error pages to return 200? In other words, is there any SEO reason to absolutely change it to return a 404? I would rather let it return 200 if there's no pressing reason to change it. [title edited by staff to provide clarity]
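(For reference, serving a custom error page that still returns a true 404 status is usually a one-line server configuration; a sketch assuming Apache, with /404.html as a placeholder error page:)

# .htaccess: show a friendly page while keeping the 404 status code
ErrorDocument 404 /404.html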
Technical SEO | cindyt-17038