The Moz Q&A Forum

Robots.txt: how to exclude sub-directories correctly?

Intermediate & Advanced SEO
fablau (Dec 13, 2013, 3:42 PM)

    Hello here,

I am trying to figure out the correct way to tell SEs to crawl this:

    http://www.mysite.com/directory/

    But not this:

    http://www.mysite.com/directory/sub-directory/

    or this:

    http://www.mysite.com/directory/sub-directory2/sub-directory/...

But given that I have thousands of sub-directories with almost infinite combinations, I can't list all the necessary definitions in a manageable way:

    disallow: /directory/sub-directory/

    disallow: /directory/sub-directory2/

    disallow: /directory/sub-directory/sub-directory/

    disallow: /directory/sub-directory2/subdirectory/

    etc...

    I would end up having thousands of definitions to disallow all the possible sub-directory combinations.

So, is the following a correct and shorter way to define what I want above:

    allow: /directory/$

    disallow: /directory/*

    Would the above work?

    Any thoughts are very welcome! Thank you in advance.

    Best,

    Fab.
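
For context, my understanding of the $ pattern (based on Google's wildcard documentation; please correct me if I have this wrong): the $ anchors the match at the end of the URL, so allow: /directory/$ matches http://www.mysite.com/directory/ exactly and nothing deeper, while disallow: /directory/* covers everything beneath it.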

MickEdwards, replying to sjunaidali (Nov 10, 2017, 5:46 AM)

I mentioned both. You add a meta robots noindex tag and remove the pages from the sitemap.
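
For reference, that tag goes in the page's head section and looks like this (plain HTML, nothing plugin-specific):

<meta name="robots" content="noindex">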

sjunaidali, replying to MickEdwards (Nov 10, 2017, 5:13 AM)

But Google is still free to index a link/page even if it is not included in the XML sitemap.

MickEdwards, replying to sjunaidali (Nov 9, 2017, 12:34 PM)

Install the Yoast WordPress SEO plugin and use that to restrict what is indexed and what is included in the sitemap.

sjunaidali, replying to MickEdwards (Nov 9, 2017, 11:54 AM)

I am using WordPress with the Enfold theme (ThemeForest).

I want some files to be accessible to Google, but they should not be indexed.

            Here is an example: http://prntscr.com/h8918o

            I have currently blocked some JS directories/files using robots.txt (check screenshot)

But due to this I am not able to pass Google's Mobile-Friendly Test: http://prntscr.com/h8925z (check screenshot)

Is it possible to allow access but use a tag like noindex in the robots.txt file? Or is there any other way out?
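
From what I've read, robots.txt itself has no noindex directive, so for non-HTML files like scripts the usual route seems to be an X-Robots-Tag HTTP header instead. On Apache with mod_headers enabled, a sketch might look like this (the .js pattern is purely illustrative):

<FilesMatch "\.js$">
Header set X-Robots-Tag "noindex"
</FilesMatch>

That would let Googlebot fetch the files, so the Mobile-Friendly Test can render the page, while asking it not to index them.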

fablau (Dec 16, 2013, 7:25 PM; edited Apr 11, 2019)

Yes, everything looks good: Webmaster Tools gave me the expected results with the following directives:

              allow: /directory/$

              disallow: /directory/*

              Which allows this URL:

              http://www.mysite.com/directory/

              But doesn't allow the following one:

              http://www.mysite.com/directory/sub-directory2/...

This page also gives an example similar to mine:

              https://support.google.com/webmasters/answer/156449?hl=en

              I think I am good! Thanks 🙂
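
For anyone finding this later, the complete group I ended up testing looks like this (with "directory" standing in for the real folder name, and assuming a single group that applies to all crawlers):

User-agent: *

allow: /directory/$

disallow: /directory/*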

fablau (Dec 16, 2013, 3:46 PM)

Thank you Michael. It is my understanding, then, that my idea of doing this:

                allow: /directory/$

                disallow: /directory/*

                Should work just fine. I will test it within Google Webmaster Tools, and let you know if any problems arise.

In the meantime, if anyone else has more ideas about all this and can confirm, that would be great!

                Thank you again.

MickEdwards, replying to fablau (Dec 14, 2013, 5:08 AM; edited Dec 16, 2013)

I've always stuck to Disallow and followed this guidance:

                  "This is currently a bit awkward, as there is no "Allow" field. The easy way is to put all files to be disallowed into a separate directory, say "stuff", and leave the one file in the level above this directory:"

                  http://www.robotstxt.org/robotstxt.html

From https://developers.google.com/webmasters/control-crawl-index/docs/robots_txt this seems contradictory: the pattern-matching table there lists /* as equivalent to / (the trailing wildcard is ignored).

I think this post will be very useful for you: http://moz.com/community/q/allow-or-disallow-first-in-robots-txt

fablau, replying to MickEdwards (Dec 13, 2013, 7:05 PM)

                    Thank you Michael,

Google and other SEs actually recognize the "allow:" directive:

                    https://developers.google.com/webmasters/control-crawl-index/docs/robots_txt

The fact is: if I don't specify that, how can I be sure that the following single directive:

                    disallow: /directory/*

Doesn't prevent SEs from crawling the /directory/ index page, as I'd like them to?
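
To illustrate my worry: since * can also match an empty string, as I understand it a lone disallow: /directory/* matches all of the following, including the index page I want crawled:

http://www.mysite.com/directory/

http://www.mysite.com/directory/sub-directory/

http://www.mysite.com/directory/sub-directory2/sub-directory/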

MickEdwards (Dec 13, 2013, 4:58 PM)

As long as you don't have directories somewhere in /* that you want indexed, then I think that will work. There is no allow, so you don't need the first line; just:

                      disallow: /directory/*

You can test it out here: https://support.google.com/webmasters/answer/156449?rd=1
