    The Moz Q&A Forum

    • Forum
    • Questions
    • Users
    • Ask the Community

    Welcome to the Q&A Forum

    Browse the forum for helpful insights and fresh discussions about all things SEO.

    1. Home
    2. SEO Tactics
    3. Technical SEO
    4. Disallow wildcard match in Robots.txt

Moz Q&A is closed.

After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.

    Disallow wildcard match in Robots.txt

    Technical SEO
• AmandaBridge

This is in my robots.txt file. Does anyone know what it is supposed to accomplish? It doesn't appear to be blocking URLs with question marks:

Disallow: /?crawler=1
Disallow: /?mobile=1

Thank you

• effectdigital (in reply to Gaston Riera's answer below)

        This is a good reply.

Everyone gets really confused because robots.txt has very minor, partial wildcard support, and that makes people think robots.txt files use regex, which they do not. Instead of this weird half-and-half implementation, it would be much better, IMO, if the robots.txt standard were updated to say "yes, you can use full regular expressions for URL string matching".

Many people are left in a kind of silly guessing game because Google doesn't properly elaborate on, or invest in expanding, the documented matching rules.

People assume that because "*" matches any sequence of characters, "?" will match any single character in a robots.txt file, as it does in shell globbing. That would make sense, but it's not the case: the only supported special characters are "*" (any sequence of characters) and "$" (an end-of-URL anchor), and "?" is treated as a literal question mark. That's why people get confused and go looking for escape characters and the like.
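To make the matching rules concrete, here is a minimal Python sketch of Google-style pattern matching, assuming only the two documented special characters ("*" for any sequence of characters and a trailing "$" as an end-of-URL anchor); the function names are illustrative and not part of any robots.txt library:

import re

def robots_pattern_to_regex(pattern):
    # Escape everything, then restore the two special characters Google
    # documents for robots.txt: '*' (any sequence of characters) and a
    # trailing '$' (end-of-URL anchor). '?' stays a literal '?'.
    anchored = pattern.endswith('$')
    body = pattern[:-1] if anchored else pattern
    regex = ''.join('.*' if ch == '*' else re.escape(ch) for ch in body)
    return re.compile('^' + regex + ('$' if anchored else ''))

def is_disallowed(path, disallow_rules):
    # Simplified check: ignores Allow rules and longest-match precedence.
    return any(robots_pattern_to_regex(rule).match(path) for rule in disallow_rules)

rules = ['/?crawler=1', '/?mobile=1']  # the rules from the question
print(is_disallowed('/?mobile=1', rules))                            # True  (prefix match)
print(is_disallowed('/category/product?mobile=1', rules))            # False ('?' is literal, no wildcard)
print(is_disallowed('/category/product?mobile=1', ['/*?mobile=1']))  # True  ('*' bridges the gap)

Running it shows why the rules in the question let deeper URLs carrying ?mobile=1 through.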

• Gaston Riera

          Hi Amanda,

Those lines tell Googlebot not to crawl URLs that begin with those strings; robots.txt rules are prefix matches against the URL path (plus query string). So Disallow: /?mobile=1 blocks domain.com/?mobile=1 but not domain.com/category/product?mobile=1. To block ?mobile=1 wherever it appears in a URL, you need a wildcard:
Disallow: /*?mobile=1

Even then, that does not block every URL with a question mark. For that, the line would have to be:
Disallow: /*?
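
For reference, a minimal robots.txt along those lines, assuming the goal was to keep both parameters out of the crawl for any crawler that honors wildcards (the User-agent line is an assumption, not part of the original file):

User-agent: *
# '*' matches any sequence of characters, so these match anywhere in the URL
Disallow: /*?crawler=1
Disallow: /*?mobile=1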

I highly recommend reading these guides:
          About /robots.txt - Official site - Robotstxt.org
          Robots.txt - Moz
          Robots.txt: the ultimate guide - YOAST
          The Complete Guide to Robots.txt - PORTENT

          Hope it helps.
          Best luck.
          GR



