The Moz Q&A Forum


Moz Q&A is closed.

After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.

How do you block development servers with robots.txt?

On-Page Optimization
  • DisMedia last edited by Apr 28, 2011, 6:58 PM

    When we create client websites, the URLs are client.oursite.com. Google is indexing these sites and attaching them to our domain. How can we stop it with robots.txt? I've heard you need to have the robots file on both the main site and the dev sites... A code sample would be groovy. Thanks, TR

    • JustinTaylor88 last edited by May 11, 2012, 5:56 AM

      We added an X-Robots-Tag header on our development sites.

      Just a note: if you use Apache and have mod_pagespeed installed, it will conflict, and pagespeed will remove the X-Robots-Tag header.
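      If mod_pagespeed is the one stripping the header, one workaround (assuming you can edit the development vhost's config) is simply to turn the module off there:

      ```apache
      # Disable mod_pagespeed on the development vhost so the
      # X-Robots-Tag header set below survives to the response.
      <IfModule pagespeed_module>
          ModPagespeed off
      </IfModule>
      ```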

      # Begin Bad Bot Blocking

      BrowserMatchNoCase Googlebot bad_bot
      BrowserMatchNoCase bingbot bad_bot
      BrowserMatchNoCase OmniExplorer_Bot/6.11.1 bad_bot
      BrowserMatchNoCase omniexplorer_bot bad_bot
      BrowserMatchNoCase Baiduspider bad_bot
      BrowserMatchNoCase Baiduspider/2.0 bad_bot
      BrowserMatchNoCase yandex bad_bot
      BrowserMatchNoCase yandeximages bad_bot
      BrowserMatchNoCase Spinn3r bad_bot
      BrowserMatchNoCase sogou bad_bot
      BrowserMatchNoCase Sogouwebspider/3.0 bad_bot
      BrowserMatchNoCase Sogouwebspider/4.0 bad_bot
      BrowserMatchNoCase sosospider+ bad_bot
      BrowserMatchNoCase jikespider bad_bot
      BrowserMatchNoCase ia_archiver bad_bot
      BrowserMatchNoCase PaperLiBot bad_bot
      BrowserMatchNoCase ahrefsbot bad_bot
      BrowserMatchNoCase ahrefsbot/1.0 bad_bot
      BrowserMatchNoCase SiteBot/0.1 bad_bot
      BrowserMatchNoCase DNS-Digger/1.0 bad_bot
      BrowserMatchNoCase DNS-Digger-Explorer/1.0 bad_bot
      BrowserMatchNoCase boardreader bad_bot
      BrowserMatchNoCase radian6 bad_bot
      BrowserMatchNoCase R6_FeedFetcher bad_bot
      BrowserMatchNoCase R6_CommentReader bad_bot
      BrowserMatchNoCase ScoutJet bad_bot
      BrowserMatchNoCase ezooms bad_bot
      BrowserMatchNoCase CC-rget/5.818 bad_bot
      BrowserMatchNoCase libwww-perl/5.813 bad_bot
      BrowserMatchNoCase magpie-crawler 1.1 bad_bot
      BrowserMatchNoCase jakarta bad_bot
      BrowserMatchNoCase discobot/1.0 bad_bot
      BrowserMatchNoCase MJ12bot bad_bot
      BrowserMatchNoCase MJ12bot/v1.2.0 bad_bot
      BrowserMatchNoCase MJ12bot/v1.2.5 bad_bot
      BrowserMatchNoCase SemrushBot/0.9 bad_bot
      BrowserMatchNoCase MLBot bad_bot
      BrowserMatchNoCase butterfly bad_bot
      BrowserMatchNoCase SeznamBot/3.0 bad_bot
      BrowserMatchNoCase HuaweiSymantecSpider bad_bot
      BrowserMatchNoCase Exabot/2.0 bad_bot
      BrowserMatchNoCase netseer/0.1 bad_bot
      BrowserMatchNoCase NetSeer crawler/2.0 bad_bot
      BrowserMatchNoCase NetSeer/Nutch-0.9 bad_bot
      BrowserMatchNoCase psbot/0.1 bad_bot
      BrowserMatchNoCase Moreoverbot/x.00 bad_bot
      BrowserMatchNoCase moreoverbot/5.0 bad_bot
      BrowserMatchNoCase Jakarta Commons-HttpClient/3.0 bad_bot
      BrowserMatchNoCase SocialSpider-Finder/0.2 bad_bot
      BrowserMatchNoCase MaxPointCrawler/Nutch-1.1 bad_bot
      BrowserMatchNoCase willow bad_bot
      Order Deny,Allow
      Deny from env=bad_bot

      # End Bad Bot Blocking

      Header set X-Robots-Tag "noindex, nofollow"


      • KeriMorgret last edited by Apr 29, 2011, 12:48 AM

        On the root of the development subdomain, use the following robots.txt content to block all robots.

        User-agent: *
        Disallow: /

        Next, verify the subdomain in Google Webmaster Tools as its own site, and request that that site be removed from the index.

        For added protection:

        • Make the robots.txt on the live site read-only, so that when you copy the dev site over you don't accidentally overwrite it with the dev robots.txt that excludes everything
        • Set up a code monitor on the robots.txt for both the dev site and the live site that checks the content of those files and alerts you if there are changes. I use https://polepositionweb.com/roi/codemonitor/index.php.
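        The monitoring step above can be as simple as hashing each fetched robots.txt and comparing it to the hash recorded on the previous run. A minimal sketch (the fetching and alerting around it are left out; the file body is passed in as bytes):

        ```python
        import hashlib
        from typing import Optional

        def fingerprint(content: bytes) -> str:
            """Hash a robots.txt body so changes between runs are detectable."""
            return hashlib.sha256(content).hexdigest()

        def has_changed(content: bytes, last_hash: Optional[str]) -> bool:
            """Compare the current robots.txt body to the hash from the last run.

            Returns False on the first run (no stored hash yet).
            """
            return last_hash is not None and fingerprint(content) != last_hash

        # Example: the dev robots.txt should keep blocking everything;
        # alert if its stored fingerprint ever stops matching.
        dev_robots = b"User-agent: *\nDisallow: /\n"
        baseline = fingerprint(dev_robots)
        ```

        Run this against both the dev and live robots.txt on a schedule, storing each hash between runs, and alert whenever has_changed returns True.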
        • roimediaworks last edited by Apr 29, 2011, 12:47 AM

          Like Daniel said, you can use robots.txt to block spiders, but this won't guarantee the URLs stay out of search results. You could instead send an X-Robots-Tag in the server headers, or generate a 403 whenever a crawler's user-agent hits the subdomain.
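          Sketched as Apache 2.2-era directives in the dev subdomain's vhost or .htaccess (the user-agent list here is illustrative, not exhaustive): send the header on every response and deny matched crawlers with a 403.

          ```apache
          # Tell any crawler that does get a page not to index it.
          Header set X-Robots-Tag "noindex, nofollow"

          # Flag known crawler user-agents and refuse them outright (403).
          SetEnvIfNoCase User-Agent "googlebot|bingbot|slurp|yandex|baiduspider" is_crawler
          Order Allow,Deny
          Allow from all
          Deny from env=is_crawler
          ```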

          • abtain last edited by Apr 28, 2011, 8:42 PM

            I put an .htaccess-style password on the development site. If you do make a robots.txt to block the site, make sure you don't accidentally deploy it to the production site.
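            For reference, password-protecting a dev site in Apache takes only a few lines of .htaccess; the paths and realm name below are assumptions, not from the original post.

            ```apache
            # .htaccess on the development subdomain (hypothetical paths)
            AuthType Basic
            AuthName "Development site - authorized users only"
            # Password file created beforehand with:
            #   htpasswd -c /path/outside/webroot/.htpasswd username
            AuthUserFile /path/outside/webroot/.htpasswd
            Require valid-user
            ```

            Crawlers can't get past the authentication prompt, so nothing gets indexed in the first place.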

            • DisMedia @SteveOllington last edited by Apr 28, 2011, 8:20 PM

              Unfortunately I don't have that option.

              • SteveOllington last edited by Apr 28, 2011, 7:47 PM

                Just use a directory instead of a sub-domain and then block that directory... that's the easiest way.
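                In that setup, the main site's robots.txt can block the whole development directory; the /dev/ path below is an assumed example:

                ```
                # robots.txt at www.oursite.com/robots.txt
                User-agent: *
                Disallow: /dev/
                ```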


