
The Moz Q&A Forum


Moz Q&A is closed.

After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content (many posts will remain viewable), we have locked both new posts and new replies.

Allow or Disallow First in Robots.txt

Technical SEO
  • irvingw last edited by May 4, 2012, 12:35 PM

    If I want to override a Disallow directive in robots.txt with an Allow directive, does the Allow directive go before or after the Disallow directive?

    example:

    Allow: /models/ford/*/*/page*

    Disallow: /models/*/*/*/page

    • Net66SEO last edited by Feb 10, 2015, 7:53 AM

      Just caught this a bit late, and it's probably too late to add anything, but my two pence: test it in Webmaster Tools, via Crawl -> robots.txt Tester. If you've not used this before, simply add the URL you want to test and Google highlights the directive that allows or disallows it.

      • fablau @Cyrus-Shepard last edited by Dec 16, 2013, 6:00 PM

        Thank you Cyrus. Yes, I have tried the robots.txt checker you suggested, and although it validates the file, it shows me a couple of warnings about the "unusual" use of wildcards. My understanding is that I would probably need to discuss all this with the Google folks directly.

        Thank you for your answer... and yes, Keri, I know this is an old thread, but it's still useful today!

        Thanks 🙂

        • Cyrus-Shepard @fablau last edited by Dec 16, 2013, 4:17 PM

          Can't say with 100% confidence, but it sounds like it might work. You could always upload it to a server and use a robots.txt checker to validate, although validator tools sometimes handle edge cases like this slightly differently, which can make their results inconclusive.

          • KeriMorgret @fablau last edited by Dec 16, 2013, 4:02 PM

            Just a quick note, this question is actually from spring of 2012.

            • fablau last edited by Dec 16, 2013, 3:53 PM

              What about something like:

              allow: /directory/$

              disallow: /directory/*

              Where I want this to be indexed:

              http://www.mysite.com/directory/

              But not this:

              http://www.mysite.com/directory/sub-directory/

              Ideas?
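One way to sanity-check these two patterns is to translate them into regexes and apply Google-style precedence by hand. This is an illustrative sketch, not Google's actual parser; it assumes the length-based rule Cyrus quotes below, with a tie going to Allow (both patterns here are 12 characters long):

```python
import re

# The two robots.txt rules, translated to anchored regexes:
# "*" matches any run of characters, a trailing "$" pins the end of the path.
allow_rule = "/directory/$"
disallow_rule = "/directory/*"

allow_re = re.compile(r"^/directory/$")
disallow_re = re.compile(r"^/directory/.*")

def is_allowed(path):
    """Google-style: the longest matching rule wins; on a tie, Allow wins."""
    allow_hit = bool(allow_re.match(path))
    disallow_hit = bool(disallow_re.match(path))
    if allow_hit and disallow_hit:
        # Both rules are the same length, so the tie goes to Allow.
        return len(allow_rule) >= len(disallow_rule)
    if allow_hit:
        return True
    if disallow_hit:
        return False
    return True  # no rule matches: allowed by default

print(is_allowed("/directory/"))                # True  (the $-anchored Allow matches)
print(is_allowed("/directory/sub-directory/"))  # False (only the Disallow matches)
```

Under those assumptions the pair behaves as intended: the bare directory URL stays crawlable while everything beneath it is blocked.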

              • irvingw @Cyrus-Shepard last edited by May 6, 2012, 9:18 AM

                I really appreciate all the effort you put in to make sure your method was correct. Many thanks.

                • Cyrus-Shepard last edited by May 6, 2012, 9:18 AM

                  Interesting question - I've had this discussion a couple of times with different SEOs. Here's my best understanding: there are actually two different answers - one if you're talking about Google, and one for every other search engine.

                  For most search engines, the "Allow" should come first. This is because the first matching pattern always wins, for the reasons Geoff stated.

                  But Google is different. They state:

                  "At a group-member level, in particular for allow and disallow directives, the most specific rule based on the length of the [path] entry will trump the less specific (shorter) rule. The order of precedence for rules with wildcards is undefined."

                  Robots.txt Specifications - Webmasters — Google Developers

                  So for Google, order is not important, only the specificity of the rule based on the length of the entry. But the order of precedence for rules with wildcards is undefined.

                  This last part is important, because your directives contain wildcards. If I'm reading this right, your particular directives are:

                  Allow: /models/ford/*/*/page*

                  Disallow: /models/*/*/*/page

                  So if it's "undefined", which directive will Google follow when order isn't important? Fortunately, there's a simple way to find out: Google Webmaster Tools lets you test any robots.txt file. I created a dummy file based on your rules, and in this case your directives worked perfectly no matter what order I put them in.

                  | URL | Result |
                  | http://cyrusshepard.com/models/ford/test/test/pages | Allowed by line 2: Allow: /models/ford/*/*/page* |
                  | http://cyrusshepard.com/models/chevy/test/test/pages | Blocked by line 3: Disallow: /models/*/*/*/page |

                  So, to summarize:

                  1. Always put Allow directives first, as most search engines follow the "first matching rule wins" convention.
                  2. Google doesn't care about order, but rather the specificity of the rule based on the length of the entry.
                  3. The order of precedence for rules with wildcards is undefined.
                  4. When in doubt, check your robots.txt file in Google Webmaster Tools.

                  Hope this helps. (Sorry for the very long answer which basically says you were right all along 🙂)
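Google's length-based precedence can be modeled in a few lines of Python. This is a rough sketch under the rules quoted above, not Google's actual implementation; the function names are made up, and the tie-break (Allow wins on equal length) is an assumption drawn from the current Robots Exclusion Protocol rather than from this thread:

```python
import re

def pattern_to_regex(pattern):
    """Turn a robots.txt path pattern into an anchored regex:
    '*' matches any run of characters, a trailing '$' pins the path end."""
    regex = re.escape(pattern).replace(r"\*", ".*")
    if regex.endswith(r"\$"):
        regex = regex[:-2] + "$"
    return re.compile("^" + regex)

def google_style_allowed(rules, path):
    """rules: list of (verdict, pattern) pairs, e.g. ("allow", "/models/ford/*").
    The longest matching pattern wins regardless of its position in the file;
    a tie goes to Allow. No match at all means the path is allowed."""
    best_len, best_verdict = -1, "allow"
    for verdict, pattern in rules:
        if pattern_to_regex(pattern).match(path):
            length = len(pattern)
            if length > best_len or (length == best_len and verdict == "allow"):
                best_len, best_verdict = length, verdict
    return best_verdict == "allow"

rules = [
    ("allow", "/models/ford/*/*/page*"),
    ("disallow", "/models/*/*/*/page"),
]
print(google_style_allowed(rules, "/models/ford/test/test/pages"))   # True
print(google_style_allowed(rules, "/models/chevy/test/test/pages"))  # False
```

Because precedence depends only on pattern length, reversing the rule list gives the same answers, which matches what the Webmaster Tools test above showed.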

                  • NakulGoyal @irvingw last edited by May 4, 2012, 1:40 PM

                    I understand your concern. I'm basing my answer on the fact that if you don't have a robots.txt at all, Google will still crawl you, which means everything is allowed by default. So all that matters, in my opinion, is the Disallow; but because you need an Allow to carve an exception out of the wildcard Disallow, you could put the Allow first and the Disallow next.

                    Honestly, I don't think it matters. If you think about the way a bot works, it's not as though it reads line 1 of robots.txt, goes off crawling, then comes back and reads the next line, and so on. Does that make sense? It reads all the lines in robots.txt and then follows the directives. But to be sure, you can try either order and see for yourself. I'm sure the results would be the same either way.

                    • zigojacko last edited by May 4, 2012, 1:34 PM

                      The Allow directives need to come before the Disallow directives for the same directory/file paths. (I have never personally tested this, although it makes logical sense to tell a robot that it can access one particular path within a directory structure before it sees that it is blocked from crawling that directory.)

                      For example:

                      Allow: /profiles

                      Disallow: /s2/profiles/me

                      Allow: /s2/profiles

                      Allow: /s2/photos

                      Allow: /s2/static

                      Disallow: /s2

                      This follows how Google has formatted its own robots.txt.
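For engines that use first-match semantics, evaluation order is exactly what the post above describes. A rough Python sketch of that behavior, treating each rule as a plain path prefix and ignoring wildcards (the function name is made up for illustration):

```python
def first_match_allowed(rules, path):
    """First-match semantics: scan the rules top to bottom and let the
    first rule whose prefix matches decide. No match means allowed."""
    for verdict, prefix in rules:
        if path.startswith(prefix):
            return verdict == "allow"
    return True

# The rule order from the example above.
rules = [
    ("allow", "/profiles"),
    ("disallow", "/s2/profiles/me"),
    ("allow", "/s2/profiles"),
    ("allow", "/s2/photos"),
    ("allow", "/s2/static"),
    ("disallow", "/s2"),
]

print(first_match_allowed(rules, "/s2/photos/album1"))     # True
print(first_match_allowed(rules, "/s2/profiles/me/edit"))  # False
print(first_match_allowed(rules, "/s2/anything-else"))     # False
```

Note that moving Allow: /s2/photos below Disallow: /s2 would block the photos path under these semantics, which is why the Allow lines come first.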

                      • irvingw @NakulGoyal last edited by May 4, 2012, 1:31 PM

                        Thanks. I want to make sure I get this right in a syntax universally understood by all engines. I have seen webmasters all over the place on this one, with some saying that crawlers use a first-matching rule and others saying they use a last-matching rule. I'm almost tempted to include the Allow directive twice, before and after, to cover all bases.

                        • NakulGoyal last edited by May 4, 2012, 1:21 PM

                          I don't think it matters, but I think I would disallow first, because by default everything is allowed.


