Campaign Crawl
-
I have a site with 8,036 pages in my sitemap index, but Mozbot only crawled 2,169 pages. It's been several months, and each week it crawls roughly the same number of pages. Any idea why I'm not getting fully crawled?
-
The best and most efficient way to resolve any question about Moz crawls or tools is to go directly to Moz with it (sometimes there's enough unique behind-the-scenes stuff going on that forum members don't have access to). Their team is great and has helped me a handful of times, quickly and politely.
-
Hi Jack,
Common culprits are server errors, redirect loops, pages disallowed in robots.txt, and so on.
Here are some suggestions from www.seomoz.org/help/crawl-diagnostics:
"Why didn’t you crawl all my pages? I only got a one page crawl. Looks like you missed a bunch!
If you suspect you didn’t get a full crawl, or Rogerbot missed some of your pages, there could be several reasons why this happens.
- We only crawl a maximum of 400 links per page. If several pages of your site all have the same 400 links on each page, we may not discover all the pages on your site. Try optimizing your navigation to reduce the number of links.
- Does your navigation rely on JavaScript? Can visitors navigate your site with JavaScript disabled? SEOmoz doesn’t crawl JavaScript, so make sure your links work in all browsing environments.
- Does your site consist of multiple subdomains? Crawls are restricted to the subdomain you set your campaign up on. This means that in general, we don't crawl multiple subdomains. You can solve this by specifying a “Root Domain” crawl in the setup process. (This requires starting a new campaign.)"
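If you want to rule out the robots.txt cause quickly, Python's standard library can simulate how a crawler reads your rules. This is only a sketch: the rules, site URL, and paths below are placeholders, and Rogerbot's actual parsing may differ from the standard-library parser.

```python
# Check whether specific paths are blocked for a given user-agent
# by a robots.txt file (placeholder rules and URLs for illustration).
from urllib import robotparser

robots_txt = """
User-agent: rogerbot
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

for path in ["/index.html", "/private/page.html"]:
    allowed = rp.can_fetch("rogerbot", "https://example.com" + path)
    print(path, "allowed" if allowed else "blocked")
```

In a real check you would point `RobotFileParser.set_url()` at your live robots.txt and call `read()` instead of parsing an inline string.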
If you are certain you don't have any of the above problems, then I would suggest contacting help@seomoz.org.
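Before emailing support, it can also help to confirm how many URLs your sitemaps actually expose versus what the crawl report shows. Here is a rough sketch of counting `<loc>` entries; it assumes the standard sitemap XML namespace, and the inline XML stands in for fetching your real sitemap files.

```python
# Count the URLs listed in a sitemap file (inline sample XML;
# a real check would fetch and parse each sitemap in the index).
import xml.etree.ElementTree as ET

sitemap_xml = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/about</loc></url>
</urlset>"""

ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(sitemap_xml)
urls = [loc.text for loc in root.findall("sm:url/sm:loc", ns)]
print(len(urls), "URLs in sitemap")
```

If the total across your sitemap index matches what you expect but the crawl count stays flat, the gap is more likely navigation depth or per-page link limits than the sitemap itself.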
Good luck.
Mike
Related Questions
-
How to make a Crawl Report readable?
Hi! I am trying to find out how to make my CSV report neat so I can interpret it. I now have a CSV report with just numbers and text all in one column. I tried the Text to Columns button, but that doesn't work: when I apply it to column A, it overwrites column B, which has the same problem! Thanks
-
Getting a warning message when attaching a Facebook account to my campaign
Hi, I am getting the warning message "Our access to this account will expire in 1¼ hours. Please reauthorize now to ensure that we can continue to collect data for this account." when I attach a Facebook account within the Campaign and Social tab of the SEOmoz application. Can you tell me what is wrong here? Thank you.
-
Setting up a campaign in SEOmoz
Hi all, when I set up a new campaign I am asked to choose between domain-name.com and www.domain-name.com. Can someone please explain the difference in terms of the campaign? If I use domain-name.com, will the campaign run on www.domain-name.com too? Thank you, SEOwise
-
Order of urls in SEOMoz crawl report
Is there any rhyme or reason to the order of URLs in the SEOmoz crawl report, or are the URLs just listed in random order?
-
Why does Crawl Diagnostics report this as duplicate content?
Hi guys, we've been addressing a duplicate content problem on our site over the past few weeks. Lately, we've implemented rel canonical tags in various parts of our ecommerce store and observed the effects by tracking changes in both SEOmoz and Webmaster Tools. Although our duplicate content errors are definitely decreasing, I can't help but wonder why some URLs are still being flagged as duplicate content by the SEOmoz crawler. Here's an example, taken directly from our Crawl Diagnostics report. URL with 4 duplicate content errors:
/safety-lights.html
Duplicate content URLs:
/safety-lights.html?cat=78&price=-100
/safety-lights.html?cat=78&dir=desc&order=position
/safety-lights.html?cat=78
/safety-lights.html?manufacturer=514
What I don't understand is that all of the URLs with parameters have a rel canonical tag pointing to the 'real' URL, /safety-lights.html. So why is the SEOmoz crawler still flagging this as duplicate content?
-
Initiate crawl
Is there any way to start the crawl of a site immediately after changes have been made, or must you wait for the next scheduled crawl? Thanks.
-
A question about Mozbot and a recent crawl on our website.
Hi all, Rogerbot has been reporting errors on our websites for over a year now, and we correct the issues as soon as they are reported. However, I have two questions regarding the recent crawl report we got on the 8th.
1.) Pages with a "noindex" tag are being crawled by Roger and reported as duplicate page content errors. I can ignore these, as Google doesn't see those pages, but surely Roger should respect "noindex" instructions as well? Also, these errors won't go away in our campaign until Roger ignores the URLs.
2.) What bugs me most is that resource pages that have been around for about six months have only just been reported as duplicate content. Our weekly crawls have never flagged these resource pages as a problem, so why now, all of a sudden? (Makes me wonder how extensive each crawl is.) Anyone else had a similar problem? Regards, GREG