    The Moz Q&A Forum


    Moz Q&A is closed.

    After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies.

    Multiple robots.txt files on server

    Technical SEO
    • mjukhud

      Hi!

      I have previously hired a developer to put up my site and noticed afterwards that he did not know much about SEO. This led me to start learning myself and applying some changes step by step.

      One of the things I am currently doing is inserting a sitemap reference into the robots.txt file (which was not there before). But just now, when I wanted to upload the file via FTP to my server, I found multiple ones - in different sizes - and I don't know what to do with them. Can I remove them? I have downloaded and opened them and they seem to be 2 text files and 2 duplicates. Names:

      robots.txt (original duplicate)
      robots.txt-Original (original)
      robots.txt-NEW (other content)
      robots.txt-Working (other content duplicate)

      Would really appreciate help and expert suggestions. Thanks!
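
      For reference, the "sitemap reference" described above is a single Sitemap: line in robots.txt pointing at the sitemap's full URL. A minimal sketch, using a placeholder domain and assuming the sitemap sits at the site root (neither detail comes from this thread):

      User-agent: *
      Disallow:

      Sitemap: https://www.example.com/sitemap.xml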

      • peakdistrictseo

        So what's the best policy if a site uses an e-commerce platform like Magento, which has a robots file, but also has a WordPress blog installed in another folder, e.g. /blog, and uses a plugin like Yoast which generates a robots file for the WordPress installation?

        Then you have 2 robots files - is this detrimental or no big deal?

        • mjukhud @seoman10

          Thanks very much for the help!

            • seoman10

              Keep a backup and remove them.

              Search engines are only going to look at the file that is exactly called robots.txt; variations of the file name will be ignored.

              Do make sure the entries are correct in the main one though; you don't want Google crawling admin pages or other confidential areas of the site.
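
              A rough sketch of what the single kept robots.txt might contain once its entries are checked - the /admin/ path and the sitemap URL are placeholders for illustration, not details from this thread:

              User-agent: *
              Disallow: /admin/

              Sitemap: https://www.example.com/sitemap.xml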

              • mjukhud @Mustansar

                Hi, thanks for the answer and help!

                Well, I only have one domain with a webpage and no active subdomains (no blog subdomain or similar) - so how does that advice apply to my situation? Can I just remove them all and upload the one I want, maybe?

                • Mustansar

                  That's a good question, EMS. The robots.txt protocol can get kind of confusing when you think about it too long, and it sounds like you've thought about this a bit. However, in this case, it might help to look at robots.txt from the perspective of the spider.

                  When a spider finds a URL, it takes the whole domain name (everything between 'http://' and the next '/'), then sticks a '/robots.txt' on the end of it and looks for that file. If that file exists, then the spider should read it to see where it is allowed to crawl.

                  In your case, Googlebot, or any other spider, should try to access three URLs: domainA.com/robots.txt, domainB.domainA.com/robots.txt, and domainB.com/robots.txt. The rules in each are treated as separate, so disallowing robots from domainA.com/ should result in domainA.com/ being removed from search results while domainB.domainA.com/ remains unaffected, which does not sound like something you want.

                  The problem you might have with the setup you have described is this: in order to keep domainB.domainA.com out of the results, you would need to have domainB.domainA.com/robots.txt exclude robots, while domainB.com/robots.txt welcomes them. This means that you would need to have a way to make domainB.domainA.com/ and domainB.com/ serve different information, and judging from what you've described, you have not set up your server to do so yet.
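
                  A minimal sketch of that two-file arrangement - the exact rules are an illustration of the idea, not something taken from this thread:

                  # domainB.domainA.com/robots.txt - keep the subdomain out
                  User-agent: *
                  Disallow: /

                  # domainB.com/robots.txt - let crawlers in everywhere
                  User-agent: *
                  Disallow: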

                  Of course, it is always possible that I have assumed too much about your situation, so it is a good idea to use Google's robots.txt analysis tool (see http://www.google.com/support/webmasters/bin/topic.py?topic=8475) to see if your robots.txt files already produce the results you want.

                  If using robots.txt files doesn't solve the problem, and assuming that you want to continue hosting all of your content on domainA.com, one strategy you really should look into would be setting up a 301 redirect from the pages on domainB.domainA.com/ to domainB.com/. If you need more advice on how to do this with your server software, your hosting company's tech support would definitely be the best place to start, but this group is here to help if more issues arise. 🙂

                  Hope that helps!

