Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content - many posts will still be viewable - we have locked both new posts and new replies. More details here.
What to do with a site of >50,000 pages vs. crawl limit?
-
What happens if you have a site in your Moz Pro campaign that has more than 50,000 pages?
Would it be better to choose a sub-folder of the site to get a thorough look at that sub-folder?
I have a few different large government websites that I'm tracking to see how they are faring in rankings and SEO. They are not my own websites; I want to see how these agencies are doing relative to what the public searches for on the technical topics and social issues the agencies manage. I'm an academic looking at science communication. I am in the process of re-setting up my campaigns to get better data than I have been getting. I am a newbie to SEO, and the campaigns I slapped together a few months ago need to be set up better: all starting on the same day, making sure I've set each one to include www or not for what ranks, refining my keywords, etc.
I am stumped on what to do about the agency websites being really huge, and what all the options are to get good data in light of the 50,000 page crawl limit. Here is an example of what I mean:
To see how EPA is doing in searches related to air quality, ideally I'd track all of EPA's web presence.
www.epa.gov has 560,000 pages -- if I put in www.epa.gov for a campaign, what happens with the site having so many more pages than the 50,000 crawl limit? What do I miss out on? Can I "trust" what I get?
www.epa.gov/air has only 1450 pages, so if I choose this for what I track in a campaign, the crawl will cover that subfolder completely, and I am getting a complete picture of this air-focused sub-folder ... but (1) I'll miss out on air-related pages in other sub-folders of www.epa.gov, and (2) it seems like I have so much of the 50,000-page crawl limit that I'm not using and could be using. (However, maybe that's not quite true - I'd also be tracking other sites as competitors - e.g. non-profits that advocate in air quality, industry air quality sites - and maybe those competitors count towards the 50,000-page crawl limit and would get me up to the limit? How do the competitors you choose figure into the crawl limit?)
Any opinions on which I should do in general on this kind of situation? The small sub-folder vs. the full humongous site vs. is there some other way to go here that I'm not thinking of?
-
Hi Sean -- Can you clarify for me how competitors in a campaign figure into the 50,000-page limit? Does the main site in the campaign get thoroughly crawled first, and then competitors are crawled up to the limit?
Some examples:
If the main site is 100 pages, and I pick 2 competitors that are 100 to 1000 pages and a 3rd gargantuan competitor of 300,000 pages, what happens? Does it matter in what order I enter competitors in this situation as to whether the 100-page and 1000-page competitors get crawled vs. whether the limit maxes out on the 300K competitor before crawling the smaller competitors?
If the main site is 300,000 pages, do any competitors in the campaign just not get crawled at all because the 50,000 limit gets all used up on the main site?
What if the main site is 20,000 pages and a competitor is 45,000 pages? Thorough crawl of main site and then partial crawl of competitor?
I feel like I have a direction to go in based on our previous discussion for the main site in the campaign, but now I'm still a little stumped and confused about how competitors operate within the crawl limit.
-
Hi There,
Thanks for writing in. This is a tricky one, because it is difficult to say whether there is an objectively right answer. In this case, your best bet would be to track a sub-folder that is under the standard subscription crawl limit and attempt to pick up what you miss using our other research tools. Although the research tools are predominantly designed for one-off lookups, you can use them to capture information that falls outside the campaign's purview. Here is a link to our research tools for your reference: moz.com/researchtools/ose/
If you do decide to enter a website that far surpasses the crawl limit, what gets cut off is determined by the site's existing link structure. Our crawler starts from the URL provided and follows the site's links outward, continuing until it reaches the page limit or runs into a dead end.
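The link-following behaviour described above can be sketched as a breadth-first traversal with a page budget. This is a minimal illustration, not Moz's actual crawler, and the toy site graph below is invented; it just shows why a huge site gets truncated at the limit while a small sub-folder is covered completely.

```python
from collections import deque

def crawl(site_graph, start, page_limit):
    """Breadth-first crawl of a link graph, stopping once page_limit
    pages have been visited. site_graph maps each URL to the URLs it
    links to (a toy stand-in for fetching a page and parsing its links)."""
    seen = {start}
    queue = deque([start])
    crawled = []
    while queue and len(crawled) < page_limit:
        url = queue.popleft()
        crawled.append(url)
        for link in site_graph.get(url, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return crawled

# Invented toy site: the home page links to three sections; /air links deeper.
site = {
    "/": ["/air", "/water", "/waste"],
    "/air": ["/air/quality", "/air/monitoring"],
}

print(crawl(site, "/", 3))      # budget hit: /waste and the deeper /air pages are missed
print(crawl(site, "/air", 10))  # small sub-folder: crawled completely
```

Starting from the root with a small budget misses whole branches; starting inside the sub-folder covers it entirely, which mirrors the trade-off discussed in this thread.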
Both approaches may present issues, so it will be more of a judgement call. One thing I will say is that we have a much easier time crawling fewer pages, so that may be something to keep in mind.
Hope this helps and if you have any questions for me please let me know.
Have a fantastic day!
-
Thanks, Patrick, for the tip about Screaming Frog! I checked out the link you shared, and it looks like a powerful tool. I'm adding it to the list of tools I need to start using.
In the meantime, though, I still need a strategy for what to do in Moz. Any opinions on whether I should set my Moz campaigns to the smaller sub-folders of a few thousand pages vs. the humongous full sites of 100,000+ pages? I guess I'm leaning towards setting them to the smaller sub-folders. Or maybe I should do a small sub-folder for one of the huge sites and do the full site for another campaign, and see what kind of results I get.
-
Hi there
I would look into Screaming Frog - you can crawl 500 URLs for free; with a license, you can crawl as many pages as you'd like.
Let me know if this helps! Good luck!
Related Questions
-
Website cannot be crawled
I have received the following message from Moz on a few of our websites now: "Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster." I have spoken with our webmaster, and they have advised the following: "The robots.txt file is definitely there, and Google is able to crawl it. Moz, however, is having some difficulty finding the file when a particular redirect is in place. For example, the page currently redirects from threecounties.co.uk/ to https://www.threecounties.co.uk/, and when this happens the Moz crawler cannot find the robots.txt on the first URL, which generates the reports you have been receiving. From what I understand, this is a flaw with the Moz software and not something that we could fix from our end. Going forward, something we could do is remove these rewrite rules to www., but these are useful redirects and removing them would likely have SEO implications." Has anyone else had this issue? Is there anything we can do to rectify it, or should we leave it as is?
Moz Pro | | threecounties0 -
Should I set blog category/tag pages as "noindex"? If so, how do I prevent "meta noindex" Moz crawl errors for those pages?
From what I can tell, SEO experts recommend setting blog category and tag pages (e.g. "http://site.com/blog/tag/some-product") as "noindex, follow" in order to keep the quality of indexable pages high. However, I just received a slew of critical crawl warnings from Moz for having these pages set to "noindex." Should the pages be indexed? If not, why am I receiving critical crawl warnings from Moz, and how do I prevent this?
Moz Pro | | NichGunn0 -
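As an aside, a quick way to audit which pages actually carry the noindex directive is to scan the robots meta tag locally. This is a minimal standard-library sketch, not how Moz generates its warnings, and the sample HTML strings are invented.

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content attribute of any <meta name="robots"> tag."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attr = dict(attrs)
            if (attr.get("name") or "").lower() == "robots":
                self.directives.append((attr.get("content") or "").lower())

def is_noindexed(html):
    """Return True if any robots meta tag on the page contains 'noindex'."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in content for content in parser.directives)

tag_page = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
post_page = '<html><head><title>A post</title></head></html>'
print(is_noindexed(tag_page))   # True
print(is_noindexed(post_page))  # False
```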
Is one page with long content better than multiple pages with shorter content?
(Note: the site links are from a sandbox site and have very low DA and PA.) If you look at this page, you will see at the bottom a lengthy article detailing all of the properties of the product categories in the links above: http://www.aspensecurityfasteners.com/Screws-s/432.htm My question is: is there more SEO value in having the one long article on the general product category page, or in breaking up the content and moving the sub-topics to the more specific sub-category pages? e.g.
http://www.aspensecurityfasteners.com/Screws-Button-Head-Socket-s/1579.htm
http://www.aspensecurityfasteners.com/Screws-Cap-Screws-s/331.htm
http://www.aspensecurityfasteners.com/Screws-Captive-Panel-Scre-s/1559.htm
Moz Pro | | AspenFasteners0 -
How do I find missing or incorrect title tags on a site with lots of pages?
I have a website that features more than 9,000 pages, and I'm trying to figure out which ones have missing or incorrect title tags. Should I start with Screaming Frog?
Moz Pro | | SapphireCo0 -
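Screaming Frog is well suited to a site this size, but the check itself is simple. Here is a minimal sketch of a title audit using only the Python standard library; fetching the 9,000 pages is omitted, the length threshold is an assumption, and the sample HTML strings are invented.

```python
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    """Captures the text inside the <title> element."""
    def __init__(self):
        super().__init__()
        self._in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def audit_title(html, max_len=60):
    """Flag a page whose title is missing or longer than max_len characters."""
    parser = TitleParser()
    parser.feed(html)
    title = parser.title.strip()
    if not title:
        return "missing"
    if len(title) > max_len:
        return "too long"
    return "ok"

print(audit_title("<html><head><title>About Us</title></head></html>"))  # ok
print(audit_title("<html><head></head><body>no title</body></html>"))    # missing
```

Run over a list of URLs, this produces the same missing/overlong report a crawler would, one page at a time.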
Compare sites?
I'm frustrated, so I want to ask a stupid question... My site, www.seadwellers.com, outranks my biggest competitor, www.rainbowreef.us, in most Moz categories EXCEPT Facebook likes (he has a ton). And yet, rainbowreef.us outranks me for most keywords on Google?! I know it's not simple, but can anyone take a quick peek and give me any insight as to why? Example: for the "Dive Key Largo" keyword he is #1 and I am #5, which is typical for the most important keywords!
Moz Pro | | sdwellers
Canonical URLs all show trailing slash on main site pages - using Yoast SEO for Wordpress - how to correct
We are using Yoast for a number of our sites, and we use the naked domain as the canonical. I have noticed in the head tags that all our sites show canonical URLs with a trailing slash. Example: for http://foxspizzajc.com, the source code shows the canonical as http://foxspizzajc.com/. Of course, it is much more likely that sites linking to us will not use the trailing slash, so preferably we do not want that as the canonical, among other reasons. Does this need to be fixed so the trailing slash is removed? I cannot see how to do this in Yoast SEO or in the Permalinks settings in WordPress. Sorry for my ignorance, and thanks for any help.
Moz Pro | | Adam_RushHour_Marketing1 -
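For what it's worth, the normalization being asked about amounts to stripping the path's trailing slash, and for the bare domain root the slash makes no difference: http://example.com and http://example.com/ identify the same resource. This is a conceptual Python sketch of that rule, not a Yoast setting.

```python
from urllib.parse import urlsplit, urlunsplit

def strip_trailing_slash(url):
    """Remove the trailing slash from a URL's path, leaving the bare
    domain root alone (a root path of "/" is equivalent to none)."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    if path not in ("", "/") and path.endswith("/"):
        path = path.rstrip("/")
    return urlunsplit((scheme, netloc, path, query, fragment))

print(strip_trailing_slash("http://foxspizzajc.com/menu/"))  # http://foxspizzajc.com/menu
print(strip_trailing_slash("http://foxspizzajc.com/"))       # http://foxspizzajc.com/
```

Whichever form is chosen, the important thing is that the canonical tag, the sitemap, and internal links all agree on it.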
How long for authority to transfer from an old page to a new page via a 301 redirect? (& Moz PA score update?)
Hi, how long approximately does Google take to pass authority via a 301 from an old page to its new replacement page? Does Moz Page Authority reflect this in its score once Google has passed it? All the best, Dan
Moz Pro | | Dan-Lawrence3 -
How can I reduce the number of links on a page and keep the site easy to navigate?
The SEOmoz Site Crawl indicates that we have too many on-page links on over 9,970 pages. This is an ecommerce site with a large number of categories. I have a couple of questions regarding this issue: How important is the "too many on-page links" factor to SEO? And what are some methods of reducing the number of links when there are a large number of categories? We currently have main categories with dropdown menus, and we have found that shoppers use them to browse the store.
Moz Pro | | afmaury1
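The "too many on-page links" check itself is easy to reproduce locally; the old guidance of roughly 100 links per page was always a rule of thumb rather than a hard limit. Here is a minimal sketch that counts anchor tags carrying an href, with invented sample markup standing in for a real category page.

```python
from html.parser import HTMLParser

class LinkCounter(HTMLParser):
    """Counts <a> tags that carry an href attribute."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a" and dict(attrs).get("href"):
            self.count += 1

def count_links(html):
    counter = LinkCounter()
    counter.feed(html)
    return counter.count

# An invented navigation menu with 120 category links would, on its own,
# already exceed the old 100-links-per-page rule of thumb.
nav = "".join(f'<a href="/cat/{i}">Category {i}</a>' for i in range(120))
print(count_links(f"<html><body><nav>{nav}</nav></body></html>"))  # 120
```

Counting links per template (header, footer, dropdown menus) this way shows which shared components contribute most, which is usually where the reductions come from.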