Robots.txt question
-
Hello,
What does the following command mean -
```
User-agent: *
Allow: /
```
Does it mean that we are blocking all spiders? Is Allow supported in robots.txt?
Thanks
-
It's a good idea to have an XML sitemap and make sure the search engines know where it is. As part of the protocol, they will look in the robots.txt file for the location of your sitemap.
-
I was assuming that by including / after Allow, we are blocking the spiders, and I also thought that Allow is not supported by search engines.
Thanks for the clarification. A better approach would be

```
User-Agent: *
Allow:
```

right?
The best one, of course, is

```
User-agent: *
Disallow:
```
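As a quick sanity check, here's a sketch using Python's standard-library robots.txt parser (`urllib.robotparser`) showing that both an empty `Allow:` and an empty `Disallow:` leave everything crawlable; the probe URL is just a placeholder.

```python
# Sketch: verify with Python's stdlib parser that an empty "Allow:" and an
# empty "Disallow:" both leave every URL crawlable. example.com is a placeholder.
from urllib.robotparser import RobotFileParser

def allows_everything(rules):
    parser = RobotFileParser()
    parser.parse(rules)
    # Probe a representative URL; with no matching blocking rule, crawling is allowed.
    return parser.can_fetch("*", "http://www.example.com/some/page.html")

print(allows_everything(["User-agent: *", "Allow:"]))      # empty Allow
print(allows_everything(["User-agent: *", "Disallow:"]))   # empty Disallow
```

Both calls report that crawling is permitted, so in practice the two empty-value forms behave the same.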
-
That's not really necessary unless there are URLs or directories you're disallowing after the Allow in your robots.txt. Allow is a directive supported by the major search engines, but search engines assume they're allowed to crawl everything they find unless you specifically disallow it in your robots.txt.
The following is universally accepted by bots and essentially means the same thing as what I think you're trying to say, allowing bots to crawl everything:

```
User-agent: *
Disallow:
```

There's a sample use of the Allow directive on Wikipedia's robots.txt page.
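The point that Allow only matters when Disallow rules are also present can be sketched with Python's standard-library parser. The `/private/` paths are made up for illustration, and note that `urllib.robotparser` applies rules in file order, so the more specific Allow line is listed first here:

```python
# Sketch: Allow becomes useful when you also Disallow something. The
# /private/ paths are hypothetical. Python's stdlib parser applies rules
# in file order, so the specific Allow line comes before the Disallow.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.parse([
    "User-agent: *",
    "Allow: /private/public-page.html",  # carve out one page...
    "Disallow: /private/",               # ...from an otherwise blocked directory
])

print(parser.can_fetch("*", "http://www.example.com/private/public-page.html"))  # allowed
print(parser.can_fetch("*", "http://www.example.com/private/secret.html"))       # blocked
```

Without the Disallow line, the Allow line would change nothing, since everything is crawlable by default.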
-
There's more information about robots.txt from SEOmoz at http://www.seomoz.org/learn-seo/robotstxt
SEOmoz and the robots.txt site suggest the following for allowing robots to see everything and listing your sitemap:

```
User-agent: *
Disallow:
Sitemap: http://www.example.com/none-standard-location/sitemap.xml
```
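To double-check that a Sitemap line on its own line is picked up, the standard-library parser (Python 3.8+) exposes it via `site_maps()`. A sketch, reusing the example sitemap URL:

```python
# Sketch (Python 3.8+): confirm the parser picks up the Sitemap line and
# still treats every URL as crawlable. The sitemap URL is the example one.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.parse([
    "User-agent: *",
    "Disallow:",
    "Sitemap: http://www.example.com/none-standard-location/sitemap.xml",
])

print(parser.site_maps())  # list containing the sitemap URL
print(parser.can_fetch("*", "http://www.example.com/any-page.html"))  # True
```

Per the sitemaps.org protocol, the Sitemap directive is independent of any User-agent block, so it can sit anywhere in the file.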
-
Any particular reason for doing so?
-
That robots.txt should be fine.
But you should also add your XML sitemap to the robots.txt file, for example:

```
User-Agent: *
Allow: /
Sitemap: http://www.website.com/sitemap.xml
```