Crawl Test limitation - ways to work with large sites?
-
Hello
I have a large site (120,000+ pages), and the Crawl Test is limited to 3,000 pages.
Is there a way to crawl a site of this size? Can I use a regular expression, for example?
Thanks!
-
Hi there. Kristina from Moz's Help Team here.
The Crawl Test tool is limited to 3,000 pages by design; the full Site Crawl within your campaign(s) is a much larger crawl that runs weekly.
All Moz Pro Standard & Medium subscription levels (you currently have access to a Standard account) cap Site Crawls within a Campaign at a maximum of 50,000 pages. If you're interested in crawling more than 50,000 pages within a single Campaign, you would need to upgrade to a higher subscription level.
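As a side note on the regular-expression part of the question: the Crawl Test itself doesn't expose a regex filter, but you can restrict a crawl to a URL pattern with a standalone crawler. Below is a minimal sketch (not a Moz feature, and the function names are my own) using only Python's standard library, where `include_pattern` is a regex applied to each URL's path:

```python
import re
from collections import deque
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen
from html.parser import HTMLParser

class LinkParser(HTMLParser):
    """Collects href values from anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def url_allowed(url, include_pattern):
    """Return True if the URL's path matches the include regex."""
    return re.search(include_pattern, urlparse(url).path) is not None

def crawl(start_url, include_pattern, max_pages=100):
    """Breadth-first crawl of same-host pages whose path matches the pattern."""
    host = urlparse(start_url).netloc
    seen, queue, crawled = {start_url}, deque([start_url]), []
    while queue and len(crawled) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
        except OSError:
            continue  # skip pages that time out or return errors
        crawled.append(url)
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)
            if (urlparse(absolute).netloc == host
                    and absolute not in seen
                    and url_allowed(absolute, include_pattern)):
                seen.add(absolute)
                queue.append(absolute)
    return crawled
```

For example, `crawl("https://example.com/", r"^/blog/")` would only follow links whose path begins with `/blog/`, which is one way to carve a 120,000-page site into crawlable sections.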
You can view all of our subscription plans on our pricing page here: https://moz.com/products/pro/pricing
As always, you can reach out to our team in the future with product questions such as this one by either emailing help@moz.com or clicking on the blue chat box in the lower right-hand corner of your screen while in the product itself.
I hope this helps, but please let me know if there's anything else I can assist with!
Thank you,
-Kristina