The Moz Q&A Forum



Moz Q&A is closed.

After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we’re not completely removing the content - many posts will still be possible to view - we have locked both new posts and new replies. More details here.

Crawl solutions for landing pages that don't contain a robots.txt file?

Technical SEO
  • Nomader
    Nomader last edited by Apr 12, 2019, 8:25 PM

    My site (www.nomader.com) is currently built on Instapage, which does not offer the ability to add a robots.txt file. I plan to migrate to a Shopify site in the coming months, but for now the Instapage site is my primary website. In the interim, would you suggest that I manually request a Google crawl through the search console tool? If so, how often? Any other suggestions for countering this Meta Noindex issue?

    • Nomader
      Nomader @BlueprintMarketing last edited by May 2, 2019, 3:41 AM

      No problem Tom. Thanks for the additional info — that is helpful to know.

      • BlueprintMarketing
        BlueprintMarketing @Nomader last edited by May 2, 2019, 1:10 AM

        Bryan,

        I’m glad that you found what you were looking for.

        I must have missed the part about the site being 100% Instapage. When you said CMS, I thought you meant something else running alongside Instapage; I think of Instapage as a landing page tool, not a CMS.

        You also asked how often you need to request that Google index your site through Search Console.

        First, make sure you have all five URL variants of your domain in Google Search Console (the bare domain plus the http://www., http://, https://www., and https:// versions):

        • nomader.com
        • https://www.nomader.com
        • https://nomader.com
        • http://www.nomader.com
        • http://nomader.com

        You should not have to request indexing once your pages are in Google's index; there is no schedule on which you need to keep re-requesting it.

        Use Search Console's index coverage reports to see whether a request is needed, and watch for notifications.

        You should request a crawl when adding new pages that are not linked from anywhere, when making big changes to your site, when adding pages without an XML sitemap, or when fixing problems and testing.

        Since you said you're going to be using Shopify: just before you go live on Shopify, make an XML sitemap of the Instapage site as it runs now.

        You can do it for free using https://www.screamingfrog.co.uk/seo-spider/

        Name it /sitemap_ip.xml or /sitemap2.xml and upload it to Shopify. Make sure it is not named the same as Shopify's own XML sitemap, /sitemap.xml, so the two can coexist.

        Submit the /sitemap_ip.xml to Search Console, then add the Shopify /sitemap.xml.

        You can run multiple XML sitemaps as long as they do not overlap.

        Just remember never to add non-200 pages (404s, 3xx redirects) or nofollow / noindex pages to an XML sitemap; Screaming Frog will ask whether you want to include them when you generate the sitemap.

        Shopify will make its own XML sitemaps, and keeping the current site as a second XML sitemap will help make sure your changes do not hurt the Instapage part of the new Shopify site.

        https://support.google.com/webmasters/answer/34592?hl=en
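The sitemap rule above (only 200-status pages, no redirects or 404s) can be sketched in Python; this is not part of the original advice, and the page list below is a hypothetical stand-in for a crawler export such as Screaming Frog's:

```python
import xml.etree.ElementTree as ET

def build_sitemap(pages):
    """Build an XML sitemap string, skipping any URL that is not a 200.

    `pages` is a list of (url, http_status) pairs, standing in for a
    crawler export (e.g. Screaming Frog's internal-URL report).
    """
    urlset = ET.Element("urlset",
                        xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for url, status in pages:
        if status != 200:  # never include 404s, redirects, etc.
            continue
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url
    return ET.tostring(urlset, encoding="unicode")

# Hypothetical crawl results for the Instapage site:
pages = [
    ("https://www.nomader.com/", 200),
    ("https://www.nomader.com/help", 200),
    ("https://www.nomader.com/old-promo", 404),  # excluded
    ("https://www.nomader.com/offer", 301),      # excluded
]
sitemap_xml = build_sitemap(pages)
```

The resulting string can be saved as /sitemap_ip.xml and submitted alongside Shopify's own /sitemap.xml.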

        Adding an XML sitemap now is a smart move.

        I hope that was of help, and sorry about missing what you meant earlier.

        respectfully,

        Tom

        https://builtwith.com/relationships/nomader.com

        https://builtwith.com/redirects/nomader.com

        • Nomader
          Nomader @seoelevated last edited by May 1, 2019, 8:39 PM

          Thanks so much for your thoughtful, detailed response. That answers my question.

          • seoelevated
            seoelevated Subscriber last edited by May 1, 2019, 8:37 PM

            Bryan,

            If I understand your intent, you want your pages indexed. I see that your site has 5 pages indexed (/, /help, /influencers, /wholesale, /co-brand). And that you have some other pages (e.g. /donations), which are not indexed, but these have "noindex" tags explicitly in their HEAD sections.

            Not having a robots.txt file is equivalent to having a robots.txt file that allows crawling of all pages. This is per http://www.robotstxt.org/orig.html, which says: "The presence of an empty "/robots.txt" file has no explicit associated semantics, it will be treated as if it was not present, i.e. all robots will consider themselves welcome."

            So, if you have no robots.txt file, the search engine will feel free to crawl everything it discovers, and then whether or not it indexes those pages will be guided by presence or absence of NOINDEX tags in your HEAD sections. From a quick browse of your site and its indexed pages, this seems to be working properly.

            Note that I'm referencing a distinction between "crawling" and "indexing".  The robots.txt file provides directives for crawling (i.e. access discovered pages, and discovering pages linked to those). Whereas the meta robots tags in the head provide directives for indexing (i.e. including the discovered pages in search index and displaying those as results to searchers). And in this context, absence of a robots.txt file simply allows the search engine to crawl all of your content, discover all linked pages, and then rely on meta robots directives in those pages for any guidance on whether or not to index those pages it finds.
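The crawl-versus-index distinction above can be demonstrated with Python's standard library: urllib.robotparser answers the crawl question, while the meta robots tag answers the index question. The HTML snippet is a made-up example, not fetched from the actual site:

```python
from html.parser import HTMLParser
from urllib import robotparser

# Crawl question: no robots.txt behaves like an empty one, so every
# URL may be crawled.
rp = robotparser.RobotFileParser()
rp.parse([])  # empty file: no restrictions
print(rp.can_fetch("*", "https://www.nomader.com/donations"))  # True

# Index question: decided per page by the meta robots tag in <head>.
class MetaRobots(HTMLParser):
    """Detect a <meta name="robots" content="noindex"> directive."""
    def __init__(self):
        super().__init__()
        self.noindex = False
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name") == "robots":
            self.noindex = "noindex" in (a.get("content") or "")

# Hypothetical HTML, similar to the /donations page described above:
page = '<html><head><meta name="robots" content="noindex"></head></html>'
parser = MetaRobots()
parser.feed(page)
print(parser.noindex)  # True: the page can be crawled but not indexed
```

So a missing robots.txt leaves crawling wide open, and the per-page meta tags still control what actually enters the index.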

            As for a sitemap, while they are helpful for monitoring indexation, and also provide help to search engines to discover all desired pages, in your case it doesn't look especially necessary. Again, I only took a quick look, but it seems you have your key pages all linked from your home page, and you have meta directives in pages you wish to keep out of the index. And you have a very small number of pages. So, it looks like you are meeting your crawl and indexation desires.

            • Nomader
              Nomader @BlueprintMarketing last edited by Apr 17, 2019, 1:46 AM

              Hi Tom,

              Unfortunately, Instapage is a proprietary CMS that does not currently support robots.txt or sitemaps. Instapage is built primarily for landing pages rather than full websites, so that's their reasoning for not supporting SEO basics like robots.txt and sitemaps.

              Thanks anyway for your help.

              Best,

              -Bryan

              • BlueprintMarketing
                BlueprintMarketing @Nomader last edited by Apr 15, 2019, 1:07 PM

                Hi,

                I see the problem now:

                https://www.nomader.com/robots.txt

                There is no robots.txt file there. Upload one to the root of your server, or to whatever specific place your developer, CMS, or hosting company recommends. I could not figure out what type of CMS you're using, if you're using one at all.

                You can make a robots.txt file using one of these:

                http://tools.seobook.com/robots-txt/generator/

                https://www.internetmarketingninjas.com/seo-tools/robots-txt-generator/exportrobots.php

                https://moz.com/learn/seo/robotstxt

                It will look like the example below.

                User-Agent: *
                Disallow:

                Sitemap: https://www.nomader.com/sitemap.xml
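As a quick sanity check (not something from the original reply), you can feed a robots.txt like the one above to Python's urllib.robotparser and confirm that an empty Disallow: blocks nothing:

```python
from urllib import robotparser

# The same robots.txt content shown above.
robots_txt = """\
User-Agent: *
Disallow:

Sitemap: https://www.nomader.com/sitemap.xml
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# An empty Disallow rule permits every path, for any user agent.
print(rp.can_fetch("Googlebot", "https://www.nomader.com/"))       # True
print(rp.can_fetch("*", "https://www.nomader.com/any/page.html"))  # True

# site_maps() (Python 3.8+) returns the Sitemap: entries it found.
print(rp.site_maps())  # ['https://www.nomader.com/sitemap.xml']
```

Running this before uploading catches typos such as a stray semicolon in a URL or a malformed directive.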

                It looks like you're using Java for your website?

                https://builtwith.com/detailed/nomader.com

                I am guessing you're not using a subdomain to host the landing pages?

                If you were using a subdomain you would have to create a robots.txt file for it as well, but from everything I can see you're using your regular domain, so you would simply create these files. (I'm in a car on a cell phone, so I only had time for a quick check of whether you have an XML sitemap file.)

                https://www.nomader.com/sitemap.xml

                You can create the sitemap with a tool called Screaming Frog SEO Spider. If your site is over 500 pages you will need to pay for it (approximately $200), but it creates an excellent sitemap and can separate out images; it's a very good tool to have. You can also find other generators by googling "xml sitemap generator", but I would recommend Screaming Frog.

                Because you will need to generate a new sitemap whenever you update your site or add landing pages, it will be done with Screaming Frog and uploaded to the same place on the server, unless your website infrastructure can generate a dynamic sitemap for you.

                Here are the directions for submitting your sitemap in Google Search Console / Google Webmaster Tools:

                https://support.google.com/webmasters/answer/34592?hl=en

                If you need help with any of this, please do not hesitate to ask; I am more than happy to help. You can also generate a sitemap in the old version of Google Webmaster Tools / Google Search Console.

                Hope this helps,

                Tom

                • Nomader
                  Nomader @BlueprintMarketing last edited by Apr 15, 2019, 3:12 AM

                  Thanks for the reply Thomas. Where do you see that my site has the robots.txt file? As far as I can tell, it is missing. Instapage does not offer robots.txt as I mentioned in my post. Here's a community help page of theirs where this question was asked and answered: https://help.instapage.com/hc/en-us/community/posts/213622968-Sitemap-and-Robotx-txt

                  So in the absence of having a robots.txt file, I guess the only way to counter this is to manually request a fetch/index from Google console? How often do you recommend I do this?

                  • BlueprintMarketing
                    BlueprintMarketing @BlueprintMarketing last edited by Apr 13, 2019, 3:18 AM

                    You don’t need to worry about Instapage and robots.txt: your site has a robots.txt, and Instapage is not set to noindex.

                    So yes, use Google Search Console to fetch / index the pages. It's very easy if you read the help information I posted below:

                    https://help.instapage.com/hc/en-us#

                    hope that helps,

                    Tom

                    • BlueprintMarketing
                      BlueprintMarketing last edited by Apr 13, 2019, 2:49 AM

                      If you cannot turn off the meta noindex, you cannot fix it with robots.txt. I suggest you contact the developer of the Instapage landing pages app; if it's locked to noindex as you said, that is the only way of countering a meta noindex the company has pre-coded.

                      I will look into this for you. I bet you can change it, just not via robots.txt; I will post an update in the morning.

                      All the best,

                      Tom




                      © 2021 - 2025 SEOMoz, Inc., a Ziff Davis company. All rights reserved. Moz is a registered trademark of SEOMoz, Inc.