My website has 18,500 pages but my SEOmoz campaign is limited to a 10,000-page crawl. How can I get the other 8,500 pages crawled? Can I use one of my 3 spare campaigns?
-
-
Hello Cesar,
It looks like you have discovered a creative, partial workaround to the 20k-page limit by breaking your site into separate sections. Unfortunately, the only way to get more than 20k pages crawled in a single campaign is to upgrade to our much more expensive Pro Enterprise plan at $4k/month. With that plan you receive a whopping 1 million pages crawled per month, though! If you're interested in learning more about this plan, please contact the help desk at help@seomoz.org
Thanks,
Kenny
-
Hi Mike,
Unfortunately, if you need to have your entire site crawled in a single campaign, the only current option is to upgrade your plan to Pro Elite. That will give you up to 20,000 pages crawled per campaign.
I don't know whether you're able to break your site into separate chunks as Cesar has, but that would be an option too. Each chunk would then be its own separate campaign.
Did you know we also have a feature request forum? It's a great place to share your ideas. Other people can vote on them, and that helps us determine priorities. You should check it out: http://seomoz.zendesk.com/forums
Thanks,
Kenny
-
Hi Mike and Cesar,
I've asked the help desk to come in and answer this thread, so it can be on the record for other people who may have similar questions. You should be hearing from them in a bit.
Keri
-
I have a similar problem, except that my site has 24k pages.
You can upgrade to PRO Elite, Cuena, which has a limit of 20k. In my case the problem is more complicated: my site has 24k pages. What can I do to crawl this volume of pages?
For now I've solved it by creating a campaign for each language (which is defined by directories), but this way I can't crawl the English-language section.
Can you help with this issue?
Related Questions
-
What to do with a site of >50,000 pages vs. crawl limit?
What happens if you have a site in your Moz Pro campaign that has more than 50,000 pages? Would it be better to choose a sub-folder of the site to get a thorough look at that sub-folder?

I have a few different large government websites that I'm tracking to see how they are faring in rankings and SEO. They are not my own websites. I want to see how these agencies are doing compared to what the public searches for on technical topics and social issues that the agencies manage. I'm an academic looking at science communication. I am in the process of re-setting up my campaigns to get better data than I have been getting. I am a newbie to SEO, and the campaigns I slapped together a few months ago need to be set up better: all on the same day, making sure I've set each to include www or not for what ranks, refining my keywords, etc. I am stumped on what to do about the agency websites being really huge, and what all the options are to get good data in light of the 50,000-page crawl limit.

Here is an example of what I mean. To see how EPA is doing in searches related to air quality, ideally I'd track all of EPA's web presence. www.epa.gov has 560,000 pages; if I put in www.epa.gov for a campaign, what happens with the site having so many more pages than the 50,000 crawl limit? What do I miss out on? Can I "trust" what I get? www.epa.gov/air has only 1,450 pages, so if I choose this for what I track in a campaign, the crawl will cover that sub-folder completely, and I am getting a complete picture of this air-focused sub-folder. But (1) I'll miss out on air-related pages in other sub-folders of www.epa.gov, and (2) it seems like I have so much of the 50,000-page crawl limit that I'm not using and could be using. (However, maybe that's not quite true: I'd also be tracking other sites as competitors, e.g. non-profits that advocate on air quality and industry air-quality sites, and maybe those competitors count towards the 50,000-page crawl limit and would get me up to the limit? How do the competitors you choose figure into the crawl limit?)

Any opinions on which I should do in general in this kind of situation? The small sub-folder vs. the full humongous site, or is there some other way to go here that I'm not thinking of?
Moz Pro | scienceisrad
-
A competitor's SEO firm is building spammy links to my website.
A competitor's SEO firm is building spammy links to my website. I know the disavow link process, but even this can be time-consuming. Is there a better way to protect my website? Any and all ideas appreciated. Are there any cool Moz tools that can help me manage this assault?

Sorry, just a follow-up from me. It appears that many of the spammy links (referrals) I see in Google Analytics are not present in Google Webmaster Tools. Does this mean that the spammy ones I'm concerned about have already been discounted by Google, or are the spammy links nofollow? Any input on this would also be appreciated.
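Since the question mentions the disavow process: the file submitted through Google's Disavow Links tool is a plain-text list, one entry per line, where a `domain:` prefix disavows every link from that domain and a bare URL disavows a single linking page. The domains below are made-up placeholders, not real offenders:

```text
# Spammy links built by a competitor's SEO firm, reviewed 2013-05.
# Lines beginning with # are comments and are ignored by Google.
domain:spammy-link-network.example
domain:cheap-directory.example
http://blog.some-site.example/spam-comment-page.html
```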
Moz Pro | BVREID
-
Lag time between Moz crawl and report notification?
I did a lot of work on one of my sites last week and eagerly awaited this week's Moz report to confirm that I had achieved what I was trying to do, but alas I still see the same errors and warnings in the latest report. This was supposedly generated five days AFTER I made the changes, so why are they not apparent in the new report? I am mainly referring to missing metadata, long page titles, duplicate content and duplicate title errors (due to crawl and URL issues). Why would the new crawl not have picked up that these have been corrected? Does it rely on some other crawl having updated (e.g. Google or Bing)?
Moz Pro | Gavin.Atkinson
-
Functionality of SEOmoz crawl page reports
I am trying to find a way to ask SEOmoz staff this question because I think it is a functionality question, so I checked the SEOmoz Pro resources. I also had no responses to it in the forum, so here it is again. Thanks much for your consideration!

Is it possible to configure the SEOmoz Rogerbot error-finding bot (which makes the crawl diagnostic reports) to obey the instructions in individual page headers and in the http://client.com/robots.txt file? For example, there is a page at http://truthbook.com/quotes/index.cfm?month=5&day=14&year=2007 that has, in its header, <meta name="robots" content="noindex">. This themed Quote of the Day page is intentionally duplicated at http://truthbook.com/quotes/index.cfm?month=5&day=14&year=2004 and at http://truthbook.com/quotes/index.cfm?month=5&day=14&year=2010, but all of them have <meta name="robots" content="noindex"> in them. So Google should not see them as duplicates, right? Google does not in Webmaster Tools. So the page should not be counted three times? But it seems to be. How do we generate a report of the actual pages flagged as duplicates so we can check? We do not believe Google sees them as duplicate pages, but Roger appears to.

Similarly, for http://truthbook.com/contemplative_prayer/, the http://truthbook.com/robots.txt file tells Google to stay clear. Yet we are showing thousands of duplicate page content errors, while Google Webmaster Tools shows only a few hundred, configured as described. Anyone? Jim
Moz Pro | jimmyzig
-
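One quick sanity check for the question above is to confirm that each page really does carry the robots noindex directive in its head. The sketch below uses only Python's standard-library HTML parser; the sample markup is a hypothetical stand-in for the Quote of the Day pages (fetching over the network is deliberately left out):

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content attribute of every <meta name="robots"> tag."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        # <meta> is a void element, so it arrives via handle_starttag.
        if tag == "meta":
            attr_map = dict(attrs)
            if attr_map.get("name", "").lower() == "robots":
                self.directives.append(attr_map.get("content", "").lower())

def is_noindex(html: str) -> bool:
    """Return True if any robots meta tag on the page contains 'noindex'."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in d for d in parser.directives)

# Hypothetical markup modeled on the duplicated Quote of the Day pages.
sample = (
    '<html><head>'
    '<meta name="robots" content="noindex">'
    '</head><body>Quote of the Day</body></html>'
)
print(is_noindex(sample))  # True
```

Running this against each of the three URLs would show whether the directive is actually present everywhere the crawler reports a duplicate.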
SEOmoz Report Card
I just ran some on-page report cards. As I was playing around with the tool, I noticed that I would get different results if I used my primary domain vs. a 2nd domain. The main difference was in how the tool was counting keywords on the page. The keyword used was 'vehicle inventory'.

Primary domain: www.brand-state.com/inventory.htm
Title = 1, URL = 0, Meta = 1, H1 = 1, H2-4 = 1, Body = 1, Strong = 1, IMG Alt = 1. Total = 7

2nd domain: www.company-name-brand.com/inventory.htm
Title = 1, URL = 0, Meta = 1, H1 = 1, H2-4 = 2, Body = 5, Strong = 4, IMG Alt = 2. Total = 13

I could understand it if the keyword was in the domain, but it's not. So I'm wondering what is going on here; any help or suggestions on what to research would be a great help. Thank you!
Moz Pro | gormaniavt
-
Campaign Crawl Report
Hello, just a quick one: is there any way I can run a crawl report for a site in a campaign so I can compare the changes? I know you can do a separate crawl test, but it won't show the differences, and the next crawl date isn't until the 28th.
Moz Pro | Prestige-SEO
-
How can I get SEOmoz to crawl a campaign on demand
Hi, how can I get SEOmoz to crawl a campaign on demand instead of on a weekly basis? For example, I have corrected some error warnings and on-page elements and would like it to re-crawl the site sooner to see how the corrections have worked. Thanks
Moz Pro | Bristolweb