    The Moz Q&A Forum


    Moz Q&A is closed.

    After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.

    Crawl solutions for landing pages that don't contain a robots.txt file?

    Technical SEO
    • Nomader

      My site (www.nomader.com) is currently built on Instapage, which does not offer the ability to add a robots.txt file. I plan to migrate to a Shopify site in the coming months, but for now the Instapage site is my primary website. In the interim, would you suggest that I manually request a Google crawl through the search console tool? If so, how often? Any other suggestions for countering this Meta Noindex issue?

      • Nomader @BlueprintMarketing

        No problem Tom. Thanks for the additional info — that is helpful to know.

        • BlueprintMarketing @Nomader

          Bryan,

          I’m glad you found what you were looking for.

          I must have missed the part about the site being 100% Instapage. When you said CMS, I thought you meant something else running alongside Instapage; I think of Instapage as a landing page builder rather than a CMS.

          You also asked how often you need to request that Google index your site through Search Console.

          First, make sure you have all five versions of your site added as properties in Google Search Console: your domain plus the http://www., http://, https://www. and https:// variants.

          • nomader.com
          • https://www.nomader.com
          • https://nomader.com
          • http://www.nomader.com
          • http://nomader.com

          You should not have to request indexing once your pages are already in Google's index; there is no schedule on which you need to keep re-requesting it.

          Use Search Console's index reports to see whether a request is actually needed, and keep an eye on any notifications.
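
          Another quick, rough way to see what Google currently has indexed (not mentioned above, just a common sanity check) is a site: query in Google search; the counts are only approximate, but it shows which pages have made it into the index:

          site:nomader.com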

          Times you should request a Google crawl: when adding new unlinked pages, when making big changes to your site, when adding pages without an XML sitemap, or when fixing problems / testing.

          You also said you're going to be moving to Shopify.

          Just before you go live on Shopify, make an XML sitemap of the Instapage site you're running now.

          You can do it for free using https://www.screamingfrog.co.uk/seo-spider/

          Name it /sitemap_ip.xml or /sitemap2.xml and upload it to Shopify. Make sure it does not use the same name as Shopify's own XML sitemap, /sitemap.xml.

          Submit /sitemap_ip.xml to Search Console, then add the Shopify /sitemap.xml. You can run multiple XML sitemaps as long as they do not overlap.

          Just remember never to add non-200 pages (404s, 3xx redirects), nofollow pages, or noindex pages to an XML sitemap; Screaming Frog will ask whether you want to include them when you build the sitemap.

          Shopify will generate its own XML sitemaps, and keeping the current site as a second sitemap will help make sure the changes do not hurt the Instapage pages once you are on Shopify.
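
          For reference, a bare-bones version of that Instapage sitemap might look something like the sketch below (the two URLs are only examples pulled from this thread; list every live, indexable page you actually want included):

          <?xml version="1.0" encoding="UTF-8"?>
          <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
            <!-- one <url> entry per live, indexable (200, non-noindex) page -->
            <url>
              <loc>https://www.nomader.com/</loc>
            </url>
            <url>
              <loc>https://www.nomader.com/help</loc>
            </url>
          </urlset>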

          https://support.google.com/webmasters/answer/34592?hl=en

          Adding an XML sitemap is a smart move.

          I hope that was of help, and sorry I missed what you meant earlier.

          respectfully,

          Tom

          https://builtwith.com/relationships/nomader.com

          https://builtwith.com/redirects/nomader.com

          • Nomader @seoelevated

            Thanks so much for your thoughtful, detailed response. That answers my question.

            • seoelevated

              Bryan,

              If I understand your intent, you want your pages indexed. I see that your site has 5 pages indexed (/, /help, /influencers, /wholesale, /co-brand), and that you have some other pages (e.g. /donations) which are not indexed; these have "noindex" tags explicitly in their HEAD sections.

              Not having a robots.txt file is equal to having a robots.txt file with a directive to allow crawling of all pages. This is per http://www.robotstxt.org/orig.html, where they say "The presence of an empty "/robots.txt" file has no explicit associated semantics, it will be treated as if it was not present, i.e. all robots will consider themselves welcome."

              So, if you have no robots.txt file, the search engine will feel free to crawl everything it discovers, and then whether or not it indexes those pages will be guided by presence or absence of NOINDEX tags in your HEAD sections. From a quick browse of your site and its indexed pages, this seems to be working properly.

              Note that I'm referencing a distinction between "crawling" and "indexing". The robots.txt file provides directives for crawling (i.e. accessing discovered pages and discovering the pages linked from them), whereas the meta robots tags in the head provide directives for indexing (i.e. including the discovered pages in the search index and displaying them as results to searchers). And in this context, absence of a robots.txt file simply allows the search engine to crawl all of your content, discover all linked pages, and then rely on the meta robots directives in those pages for any guidance on whether or not to index the pages it finds.
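
              To make the two kinds of directives concrete, here is a rough side-by-side sketch (the allow-all robots.txt behaves the same as having no robots.txt at all, and the meta tag is the sort of directive already present on pages like /donations):

              # Crawl directive: a robots.txt file at the site root, e.g. https://www.nomader.com/robots.txt
              # An empty Disallow means "crawl everything", i.e. the same behaviour as having no robots.txt.
              User-agent: *
              Disallow:

              <!-- Index directive: a meta robots tag inside a page's <head>.
                   This is what keeps a crawled page such as /donations out of the index. -->
              <meta name="robots" content="noindex">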

              As for a sitemap, while they are helpful for monitoring indexation, and also provide help to search engines to discover all desired pages, in your case it doesn't look especially necessary. Again, I only took a quick look, but it seems you have your key pages all linked from your home page, and you have meta directives in pages you wish to keep out of the index. And you have a very small number of pages. So, it looks like you are meeting your crawl and indexation desires.

              • Nomader @BlueprintMarketing

                Hi Tom,

                Unfortunately, Instapage is a proprietary CMS that does not currently support robots.txt or sitemaps. Instapage is primarily built for landing pages rather than full websites, so that's their reasoning for not adding support for SEO basics like robots.txt and sitemaps.

                Thanks anyway for your help.

                Best,

                -Bryan

                • BlueprintMarketing @Nomader

                  Hi,

                  So I see the problem now.

                  https://www.nomader.com/robots.txt does not exist, so the site has no robots.txt file. Upload one to the root of your server, or to wherever your developer, CMS, or hosting company recommends. I could not figure out what type of CMS you're using, if you're using one at all.

                  You can make a robots.txt file using one of these:

                  http://tools.seobook.com/robots-txt/generator/

                  https://www.internetmarketingninjas.com/seo-tools/robots-txt-generator/exportrobots.php

                  https://moz.com/learn/seo/robotstxt

                  It will look like this below.

                  User-Agent: *
                  Disallow:

                  Sitemap: https://www.nomader.com/sitemap.xml
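
                  Once you've uploaded it, one quick way to confirm it is reachable (assuming you have curl handy; loading the URL in a browser works just as well) is:

                  curl -s https://www.nomader.com/robots.txt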

                  It looks like you’re using JavaScript for your website?

                  https://builtwith.com/detailed/nomader.com

                  I am guessing you’re not using a subdomain to host the landing pages?

                  If you are using a subdomain, you would have to create a separate robots.txt file for it, but from everything I can see you’re using your regular domain, so you would simply create these files. (I’m in a car on a cell phone, so I only did a quick check for an XML sitemap file, but I think you do have one.)

                  https://www.nomader.com/sitemap.xml

                  You can purchase a tool called Screaming Frog SEO Spider; if your site is over 500 pages you will need the paid version, which is approximately $200, but with it you can create an excellent sitemap. You can also create an XML sitemap by googling for XML sitemap generators. However, I would recommend Screaming Frog because you can separate out the images, and it’s a very good tool to have.

                  You will need to generate a new sitemap with Screaming Frog and upload it to the same place on the server whenever you update your site or add landing pages, unless you can create a dynamic sitemap with whatever infrastructure your website is running on.

                  Here are the directions for adding your sitemap to Google Search Console / Google Webmaster Tools:

                  https://support.google.com/webmasters/answer/34592?hl=en

                  If you need any help with any of this, please do not hesitate to ask; I am more than happy to help. You can also generate a sitemap in the old version of Google Webmaster Tools / Google Search Console.

                  Hope this helps,

                  Tom

                  • Nomader @BlueprintMarketing

                    Thanks for the reply, Thomas. Where do you see that my site has a robots.txt file? As far as I can tell, it is missing. Instapage does not offer robots.txt, as I mentioned in my post. Here's a community help page of theirs where this question was asked and answered: https://help.instapage.com/hc/en-us/community/posts/213622968-Sitemap-and-Robotx-txt

                    So in the absence of a robots.txt file, I guess the only way to counter this is to manually request a fetch/index from Google Search Console? How often do you recommend I do this?

                    • BlueprintMarketing @BlueprintMarketing

                      You don’t need to worry about Instapage and robots.txt; your site has a robots.txt, and Instapage is not set to noindex.

                      So yes, use Google Search Console to fetch / index the pages. It’s very easy if you read the help information I posted below.

                      https://help.instapage.com/hc/en-us#

                      hope that helps,

                      Tom

                      • BlueprintMarketing

                        If you cannot turn off the “Meta Noindex”, you cannot fix it with robots.txt; I suggest you contact the developer of the Instapage landing pages app. If it’s locked to noindex as you said, that is the only option I can see for countering a noindex the company has pre-coded.

                        I will look into this for you. I bet that you can change it, but not via robots.txt. I will update this in the morning for you.

                        All the best,

                        Tom

