Is having no robots.txt file the same as having one and allowing all agents?
-
The site I am working on currently has no robots.txt file. However, I have just uploaded a sitemap and would like to point the robots.txt file to it.
Once I upload the robots.txt file, if I allow access to all agents, is this the same as when the site had no robots.txt file at all? Do I need to specify crawler access, or can the robots.txt file just contain the link to the sitemap?
-
In my opinion, a sitemap is more important than robots.txt, as it helps a search engine bot crawl a website effectively. Robots.txt is generally used to request (via Allow: or Disallow: directives) that a crawler not crawl and index certain sections of your website, such as those containing sensitive data. It is entirely up to the crawler to respect that request by not crawling and indexing those sections. Still, it is general practice among webmasters worldwide to have a robots.txt file for each of their sites. A common robots.txt granting access to the entire website looks like this:
User-agent: *
Disallow:
Sitemap: http://www.yoursite.com/sitemap.xml
So if you want certain sections (folders, directories) of your site not to be crawled by a bot, you can use robots.txt for that.
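For example, a sketch that keeps a hypothetical /private/ folder out of crawlers while leaving the rest of the site open:
User-agent: *
Disallow: /private/
Sitemap: http://www.yoursite.com/sitemap.xml
Everything not matched by a Disallow line remains crawlable.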
Yes, logically it's the same: a robots.txt file that grants all access and no robots.txt file at all both leave everything crawlable. The only difference is between stating the permission explicitly and relying on the default. Having a robots.txt file doesn't guarantee a rank boost in the SERPs. Hope it helps.
Cheers
Related Questions
-
Use of multiple keywords that are similar for one local site
Hi, I thought that if I wanted to rank a local site for the core keyword 'Landscaping Location', variations of this keyword should be used on the same page. But I recently read that if I wanted to rank for:
Landscaping Location
Landscaping in Location
Landscaping Services in Location
then I should use a separate page for each term. Is this correct? A small local website will probably only have a few pages, so making up pages solely to go after keywords can't be right. But then would opportunities be missed? Thanks for your help with this!
Technical SEO | | CamperConnect14
-
Is it important to include image files in your sitemap?
I run an ecommerce business that has over 4000 product pages which, as you can imagine, branch off into thousands of image files. Is it necessary to include those in my sitemap for faster indexing? Thanks for your help! -Reed
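For what it's worth, Google's image sitemap extension lets you list image files under the page that displays them; a minimal sketch with hypothetical product and image URLs:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">
  <url>
    <loc>http://www.example.com/products/blue-widget</loc>
    <image:image>
      <image:loc>http://www.example.com/images/blue-widget-front.jpg</image:loc>
    </image:image>
    <image:image>
      <image:loc>http://www.example.com/images/blue-widget-side.jpg</image:loc>
    </image:image>
  </url>
</urlset>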
Technical SEO | | IceIcebaby
-
Handling Multiple Restaurants Under One Domain
We are working with a client that has 2 different restaurants. One has been established since 1938; the other opened in late 2012. Currently, each restaurant's site has its own domain name. From a marketing/branding perspective, we would like to make the customers [web visitors] of the established restaurant aware of the sister restaurant. To accomplish this, we are thinking about creating a landing page that links to each restaurant. To do this, we would need to purchase a brand new domain, and then place each restaurant in a separate subfolder of the new domain. The other thought is to have each site accessed from the main new domain [within subfolders] and also point each existing URL to the appropriate subfolder for each restaurant. We know there are some branding and marketing hurdles with this approach that we need to think through/work out. But we are not sure how this would impact their SEO, and we assume it will not be good. Any thoughts on this topic would be greatly appreciated.
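A rough sketch of that second approach in Apache mod_rewrite, assuming hypothetical domains (newbrand.com as the new umbrella domain, with both old domains pointed at the same server); illustrative only, not a drop-in config:
# .htaccess on the server hosting the new umbrella domain
RewriteEngine On

# 301 the established restaurant's old domain into its subfolder,
# preserving the requested path
RewriteCond %{HTTP_HOST} ^(www\.)?oldrestaurant1938\.com$ [NC]
RewriteRule ^(.*)$ http://www.newbrand.com/restaurant1938/$1 [R=301,L]

# Same for the newer restaurant's old domain
RewriteCond %{HTTP_HOST} ^(www\.)?oldrestaurant2012\.com$ [NC]
RewriteRule ^(.*)$ http://www.newbrand.com/restaurant2012/$1 [R=301,L]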
Technical SEO | | thinkcreativegroup
-
Empty Meta Robots Directive - Harmful?
Hi, We had a coding update and a side-effect was that our meta robots directive was emptied; in other words, the tag now has an empty content value across all of the site. I've since noticed that Google's cache date on all of the pages - at least, the ones I tested - is no later than 17 December '12, the Monday after the directive was emptied en masse. So, A: does anyone have solid evidence of an empty directive causing problems? Past experience, a Matt Cutts or Fishkin quote, etc. And then B: it seems fairly well correlated, but does my entire site's homogeneous Cached date point to this tag removal? Or is it fairly normal to have a single cache date across a large site (we're a large ecommerce site)? Our site: http://www.zando.co.za/ I'm having the directive reinstated as soon as Dev capacity permits. And then, for extra credit, is there a way with Google's API, or perhaps some other tool, to run through an arbitrary list of URLs and retrieve their Cached dates? I'd want to do this for diagnostic purposes, and preferably in a way that's OK with Google. I'd rather avoid cURLing for the cached URLs and scraping out the dates with Bash, or any such thing. Cheers,
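For illustration, a sketch of the tag in question (the exact markup is inferred from the description above, so treat it as an assumption):
<!-- after the coding update: directive present but empty -->
<meta name="robots" content="">

<!-- the default behaviour crawlers assume when no directive is given -->
<meta name="robots" content="index, follow">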
Technical SEO | | RocketZando
-
Anyone worked on a sites.google.com/ website?
Hi, I have a client with a sites.google.com/ website. Has anyone ever used one, or had to do SEO on one? Any help would be very much appreciated. Thanks
Technical SEO | | tempowebdesign
-
Kill your htaccess file, take the risk to learn a little
Last week I was browsing Google's index with "site:www.mydomain.com" and wanted to scan over what Google had indexed for my site. I came across a URL that was mistakenly indexed. It went something like this: www.mydomain.com/link1/link2/link1/link4/link3

I didn't understand why Google had indexed a page like that, since the "link" pages were links on my main navigation bar, which were site-wide links. The URL seemed to be looping infinitely, over and over. So I started trying to see how many of these Google had indexed, and I came across about 20 pages. I went through the process of removing the URLs in Webmaster Tools, but then I wanted to know why it was happening.

I discovered that I had mistakenly placed some links in my site's header in such a manner: <a href="link1/">, <a href="link2/">, <a href="link3/">. If you know HTML, you will realize that by not placing the "/" at the front of the link, I was telling that page to append the link to the URL it was currently on. What this did was create an infinite loop of links, which is not good 🙂 Basically, when Google went to www.mydomain.com/link1/ it found the other links, which told it to add that URL onto the existing URL and then follow it. Something like: www.mydomain.com/link1/link2/... When you do not add the "/" in front of the directory you are linking to, it will do this. The "/" refers to the root, so if you place it in front of the directory you are linking to, the link always resolves from the root, with the rest of the URL following from there.

So what did I do? Even though I was able to find about 20 URLs using the "site:" search method, there had to be more out there. I tried to search but was not able to find any more; still, I was not convinced. The light bulb went on at this point. My .htaccess file contained many 301 redirects from my attempt to redirect those pages to a real page; there were not really relevant pages to redirect to. So how could I find out what Google had really indexed, since Webmaster Tools only reports the top 1000 links? I decided to kill my htaccess file. Knowing that Google is "forgiving" when major changes happen to your site, I knew it would not simply kill my site for removing my htaccess file immediately.

I waited 3 days, then BOOM! Webmaster Tools reported that it had found a ton of 401's on my site. I looked at the Crawl Errors, and there they were: all those infinite-loop links that I knew had to be out there. How many were there? Google found over 5,000 of them in the first crawl. OMG! Yeah, could you imagine the "low quality" score I was getting on those pages?

By seeing all those links I was able to determine about 4 patterns in them. For example:
www.mydomain.com/link1/link2/
www.mydomain.com/link1/link3/
www.mydomain.com/link1/link4/
www.mydomain.com/link1/link5/

Now, my issue was that I wanted to keep all the URLs pointing to www.mydomain.com/link1 but needed anything after that gone. I went into my robots.txt file and added this:
Disallow: /link1/link2/
Disallow: /link1/link3/
Disallow: /link1/link4/
Disallow: /link1/link5/

There were many more pages indexed that went deeper into those links, but I knew I wanted anything after the 2nd directory gone, since that was the start of the loop I had detected. With that I was able to catch, from what I know, at least 5k links if not more. What did I learn from this?
Kill your htaccess file for a few days and see what comes back in your reports. You might learn something 🙂 After doing this I simply replaced my htaccess file and I am on my way to removing a ton of "low quality" links I didn't even know I had.
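For anyone who wants to see the markup difference behind all this, a minimal sketch (the link names are illustrative):
<!-- relative href: resolved against the current URL, so from
     /link1/ this becomes /link1/link2/ and keeps compounding -->
<a href="link2/">Link 2</a>

<!-- root-relative href: always resolved from the site root,
     no matter which page the site-wide header appears on -->
<a href="/link2/">Link 2</a>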
Technical SEO | | cbielich
-
Should I block robots from URLs containing query strings?
I'm about to block off all URLs that have a query string using robots.txt. They're mostly URLs with coremetrics tags and other referrer info. I figured that search engines don't need to see these as they're always better off with the original URL. Might there be any downside to this that I need to consider? Appreciate your help / experiences on this one. Thanks Jenni
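For reference, the usual way to write that block, using the wildcard syntax that Google and Bing document (other crawlers may not honour wildcards):
User-agent: *
Disallow: /*?
Note this matches every URL containing a "?", including any parameterised URLs you might still want crawled, so a pattern scoped to the specific tracking parameters may be safer.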
Technical SEO | | ShearingsGroup
-
Am I missing something if I absorb one site into another?
We are absorbing one of our sites into another and I want to make sure I am not missing anything. The site that is being absorbed will no longer exist, as all the content has been replicated/duplicated on the main site. About a month ago we added canonicals to all the duplicate content pointing to the new site it will be a part of. That went very well, and organic traffic continued to flow to those pages on the new site. We recently (yesterday) used 301s on all the pages via mod_rewrite and redirected the domain name from the old site to the new one. Using mod_rewrite, we redirect any other page linking to that domain to www.newsite.com?ref=oldsite.com. So far I don't see anything coming in on an unexpected link. I still need to tell Google via Webmaster Tools that the old site is now on newsite.com, correct? Is there anything else I might bump into that we haven't thought of? Thanks
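A sketch of what that htaccess setup might look like, with hypothetical paths (the real file would map each old page to its true equivalent):
# .htaccess for oldsite.com
RewriteEngine On

# Pages with a direct equivalent get a one-to-one 301
RewriteRule ^about/?$ http://www.newsite.com/about [R=301,L]

# Everything else falls through to the new homepage,
# tagged so the old domain shows up in analytics
RewriteRule ^(.*)$ http://www.newsite.com/?ref=oldsite.com [R=301,L]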
Technical SEO | | GeorgeLaRochelle