
The Moz Q&A Forum


Moz Q&A is closed.

After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.

What to do with a site of >50,000 pages vs. crawl limit?

Moz Pro
  • scienceisrad (Subscriber) | last edited Jul 1, 2015, 4:52 PM

    What happens if you have a site in your Moz Pro campaign that has more than 50,000 pages?

    Would it be better to choose a sub-folder of the site to get a thorough look at that sub-folder?

    I have a few different large government websites that I'm tracking to see how they are faring in rankings and SEO. They are not my own websites; I want to see how these agencies are doing relative to what the public searches for on the technical topics and social issues the agencies manage. I'm an academic looking at science communication. I am in the process of re-setting up my campaigns to get better data than I have been getting: I am a newbie to SEO, and the campaigns I slapped together a few months ago need a better setup, e.g., all starting on the same day, with the www/non-www choice set correctly for what ranks, with refined keywords, etc.

    I am stumped on what to do about the agency websites being really huge, and on what the options are for getting good data in light of the 50,000-page crawl limit. Here is an example of what I mean:

    To see how EPA is doing in searches related to air quality, ideally I'd track all of EPA's web presence.

    www.epa.gov has 560,000 pages -- if I put www.epa.gov into a campaign, what happens given that the site has so many more pages than the 50,000 crawl limit? What do I miss out on? Can I "trust" what I get?

    www.epa.gov/air has only 1450 pages, so if I choose this for what I track in a campaign, the crawl will cover that subfolder completely, and I am getting a complete picture of this air-focused sub-folder ... but (1) I'll miss out on air-related pages in other sub-folders of www.epa.gov, and (2) it seems like I have so much of the 50,000-page crawl limit that I'm not using and could be using.  (However, maybe that's not quite true - I'd also be tracking other sites as competitors - e.g. non-profits that advocate in air quality, industry air quality sites - and maybe those competitors count towards the 50,000-page crawl limit and would get me up to the limit? How do the competitors you choose figure into the crawl limit?)

    Any opinions on which I should do in general on this kind of situation?  The small sub-folder vs. the full humongous site vs. is there some other way to go here that I'm not thinking of?
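
    One way to size up the sub-folders before committing to a campaign is to count sitemap URLs per path prefix. Here is a rough Python sketch; the sitemap location is an assumption (many sites publish one at /sitemap.xml, but I haven't verified that for epa.gov), and the 50,000 threshold simply mirrors the campaign crawl limit.

        # Count sitemap URLs per top-level path segment to see which
        # sub-folders would fit under a 50,000-page crawl limit.
        import xml.etree.ElementTree as ET
        from collections import Counter
        from urllib.parse import urlparse

        import requests

        NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

        def sitemap_urls(sitemap_url):
            """Yield page URLs, recursing through sitemap index files."""
            root = ET.fromstring(requests.get(sitemap_url, timeout=30).content)
            if root.tag.endswith("sitemapindex"):
                for loc in root.findall("sm:sitemap/sm:loc", NS):
                    yield from sitemap_urls(loc.text.strip())
            else:
                for loc in root.findall("sm:url/sm:loc", NS):
                    yield loc.text.strip()

        def pages_per_subfolder(sitemap_url, depth=1):
            """Count URLs grouped by their first `depth` path segments."""
            counts = Counter()
            for url in sitemap_urls(sitemap_url):
                segments = urlparse(url).path.strip("/").split("/")
                counts["/" + "/".join(segments[:depth])] += 1
            return counts

        # Hypothetical sitemap location -- substitute the real one:
        counts = pages_per_subfolder("https://www.epa.gov/sitemap.xml")
        for folder, n in counts.most_common(20):
            print(f"{folder:40s} {n:>8,d}  {'fits' if n <= 50_000 else 'over limit'}")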

    • scienceisrad (Subscriber) | last edited Jul 22, 2015, 4:26 PM

      Hi Sean -- Can you clarify for me how competitors in a campaign figure into the 50,000-page limit? Does the main site in the campaign get thoroughly crawled first, and then competitors are crawled up to the limit?

      Some examples:

      If the main site is 100 pages, and I pick two competitors of 100 and 1,000 pages plus a third gargantuan competitor of 300,000 pages, what happens? Does the order in which I enter the competitors matter here, i.e., do the 100-page and 1,000-page competitors get crawled, or does the limit max out on the 300K-page competitor before the smaller competitors are reached?

      If the main site is 300,000 pages, do any competitors in the campaign just not get crawled at all, because the 50,000 limit gets used up entirely on the main site?

      What if the main site is 20,000 pages and a competitor is 45,000 pages?  Thorough crawl of main site and then partial crawl of competitor?

      I feel like I have a direction to go in based on our previous discussion for the main site in the campaign, but now I'm still a little stumped and confused about how competitors operate within the crawl limit.
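
      In the meantime, here is the arithmetic of those scenarios under one assumed model: the main site is crawled first, then competitors in the order entered, all drawing on a single 50,000-page budget. This model is a hypothetical illustration for reasoning about the scenarios above, not a confirmed description of how Moz actually divides the crawl.

          # Hypothetical shared-budget model: main site first, then
          # competitors in entry order, each truncated once the shared
          # 50,000-page budget runs out. NOT confirmed Moz behavior.
          BUDGET = 50_000

          def allocate(main_pages, competitors):
              remaining = BUDGET
              plan = []
              for name, size in [("main site", main_pages)] + competitors:
                  crawled = min(size, remaining)
                  remaining -= crawled
                  plan.append((name, size, crawled))
              return plan

          # Scenario: 100-page main site, gargantuan competitor entered first.
          for name, size, crawled in allocate(100, [("big", 300_000),
                                                    ("small-a", 100),
                                                    ("small-b", 1_000)]):
              print(f"{name:12s} {crawled:>7,d} of {size:>7,d} pages crawled")

          # Under this model, entry order matters: "big" absorbs the whole
          # remaining budget, so small-a and small-b get nothing.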

      • Sean_Peerenboom | last edited Jul 7, 2015, 1:31 PM

        Hi there,

        Thanks for writing in! This is a tricky one, because it is difficult to say whether there is an objectively right answer. 😞 In this case, your best bet would be to create a campaign for a sub-folder that is under the standard subscription crawl limit and attempt to pick up what you miss using the other research tools. Although our research tools are predominantly designed for one-off interactions, you could probably use them to capture information that is a bit outside of the campaign's purview. Here is a link to our research tools for your reference: moz.com/researchtools/ose/

        If you do decide to enter a website that far surpasses the crawl limit, what gets cut off is determined by the existing site structure. 😞 The way our crawler works is that it starts from the link provided and uses the existing link structure to keep crawling the site until it hits the limit or runs into a dead end.
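
        As a minimal sketch of that kind of link-following crawl (breadth-first from a seed URL, confined to a sub-folder prefix, stopping at a page limit or a dead end), here is some Python using the requests and BeautifulSoup libraries. It illustrates the general technique only; it is not our actual crawler.

            # Minimal link-following crawl: start from a seed URL, follow
            # in-site links breadth-first, and stop once the page limit is
            # reached or the frontier empties out (a dead end).
            from collections import deque
            from urllib.parse import urljoin, urldefrag

            import requests
            from bs4 import BeautifulSoup

            def crawl(seed, prefix, limit=50_000):
                seen, frontier = {seed}, deque([seed])
                while frontier:
                    url = frontier.popleft()
                    try:
                        resp = requests.get(url, timeout=10)
                    except requests.RequestException:
                        continue  # skip pages that fail to load
                    soup = BeautifulSoup(resp.text, "html.parser")
                    for a in soup.find_all("a", href=True):
                        link = urldefrag(urljoin(url, a["href"]))[0]
                        # Only follow links inside the chosen sub-folder.
                        if link.startswith(prefix) and link not in seen:
                            if len(seen) >= limit:
                                return seen  # crawl budget exhausted
                            seen.add(link)
                            frontier.append(link)
                return seen  # dead end: no new links left to follow

            pages = crawl("https://www.epa.gov/air", "https://www.epa.gov/air",
                          limit=1_500)
            print(len(pages), "pages discovered")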

        Both approaches may present issues, so it will be more of a judgement call. One thing I will say is that we have a much easier time crawling fewer pages, so that may be something to keep in mind.

        Hope this helps! If you have any questions for me, please let me know.

        Have a fantastic day!

        • scienceisrad (Subscriber) | last edited Jul 2, 2015, 12:14 PM

          Thanks, Patrick, for the tip about Screaming Frog! I checked out the link you shared, and it looks like a powerful tool. I'm adding it to the list of tools I need to start using.

          In the meantime, though, I still need a strategy for what to do in Moz.  Any opinions on whether I should set my Moz campaigns to the smaller sub-folders of a few thousand pages vs. the humongous full sites of 100,000+ pages?  I guess I'm leaning towards setting them to the smaller sub-folders.  Or maybe I should do a small sub-folder for one of the huge sites and do the full site for another campaign, and see what kind of results I get.

          • PatrickDelehanty | Jul 1, 2015, 5:31 PM (last edited 5:32 PM)

            Hi there

            I would look into Screaming Frog: you can crawl 500 URLs for free, or, with a license, you can crawl as many pages as you'd like.

            Let me know if this helps! Good luck!

