How long should the weekly crawl take?
-
Mine started yesterday afternoon and it's now almost 11pm on Sunday. 30+ hours and still not finished (and no progress indicator). 438 pages quoted as being crawled. That's not normal - right?
I have made a bunch of changes based on last week's crawl, so I have been eagerly waiting for this to finish. But 30 hours?...
Thanks.
Mark
-
Hi Mark,
I've forwarded this to the crawl service team and they are looking into it.
Sorry you've had an unexpected delay, and we'll get it on track for you as soon as possible.
Walt
-
Still running and approaching two days now
My first month trial is up today. This has got me thinking about whether to continue. 40+ hours just seems too much for a small website such as mine.
Mark
-
Thank you, Joshua. Just to clarify, this is not the initial crawl (that happened a couple of weeks ago). This is the third week for me. I am pretty sure the first two were way quicker than this.
Thanks again.
Mark
-
I've had a couple of instances where that has happened. It's not an answer you are going to love, but wait another 24 hours and let me know if it hasn't completed.
The first one is always the longest and after that it slots into a routine that you can bank on.
Hope that helps pal.
Related Questions
-
Why did Moz crawl our development site?
In our Moz Pro account we have one campaign set up to track our main domain. This week Moz threw up around 400 new crawl errors, 99% of which were meta noindex issues. What happened is that somehow Moz found the development/staging site and decided to crawl that. I have no idea how it was able to do this - the robots.txt is set to disallow all and there is password protection on the site. It looks like Moz ignored the robots.txt, but I still don't have any idea how it was able to crawl at all - it should have received a 401 Unauthorized and not gone any further. How do I a) clean this up without going through and manually ignoring each issue, and b) stop this from happening again? Thanks!
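For anyone double-checking their own staging setup, the usual belt-and-braces configuration is a blanket robots.txt disallow plus HTTP authentication. This is a generic sketch, not the poster's actual file:

```text
# robots.txt at the root of the staging host.
# Blocks all well-behaved crawlers, including Moz's Rogerbot.
User-agent: *
Disallow: /
```

robots.txt is only advisory, so the password protection is the real gate: a correctly configured server should answer 401 Unauthorized before serving any page content, which stops a crawler regardless of what robots.txt says.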
Moz Pro | MultiTimeMachine -
Solving URL Too Long Issues
Moz.com is reporting that many URLs are too long. These particularly affect product URLs, where the URL is typically https://www.domainname.com/collections/category-name/products/product-name (you guessed it, we're using Shopify). However, we use canonicals that strip out most of the URL and are just structured https://www.domainname.com/product-name, so Google should be reading the canonical and not the long-winded version. However, Moz cannot seem to spot this... does anyone else have this problem, and how do you solve it so that we can satisfy the Moz.com crawl engine?
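For readers comparing notes, the canonical annotation described above sits in the head of the long Shopify URL and points at the short form (domain and paths taken from the question):

```html
<!-- In the <head> of the page served at
     https://www.domainname.com/collections/category-name/products/product-name -->
<link rel="canonical" href="https://www.domainname.com/product-name" />
```

Google consolidates signals to the canonical URL, but a crawler's URL-length check is applied to the address it actually fetched, which is why the notice can keep appearing even with the tag in place.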
Moz Pro | Tildenet -
Why is the MOZ crawl returning URLs with variable results showing Missing Meta Desc? Example: http://nw-naturals.net/?page_number_0=47
Can you help me dive down into my website's guts to find out why the MOZ crawl is returning URLs with variable results? And why it says a description is missing when it's not really a page? Example: http://nw-naturals.net/?page_number_0=47. I've asked MOZ, but it's a web development issue so they can't help me with it. Has anyone had an issue with this on their website? Thank you!
Moz Pro | lewisdesign -
Seomoz crawling filtered pages
Hi, I just checked an SEO campaign we started last week, so I opened SEOmoz to see the crawl diagnostics. Lots of duplicate content & duplicate titles are showing up, but that's because Rogerbot is crawling all of the filtered pages as well. How do I exclude these pages from being crawled?
/product/brand-x/3969?order=brand&sortorder=ASC
/product/brand-x/3969?order=popular&sortorder=ASC
/product/brand-x/3969?order=popular&sortorder=DESC&page=10
/product/brand-x/3969?order=popular&sortorder=DESC&page=11
Moz Pro | nvs.nim -
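If the goal is just to keep these filtered variants out of the Moz crawl, a wildcard robots.txt rule targeted at Rogerbot is one option. The parameter name is taken from the example URLs above; check Moz's current documentation on wildcard support before relying on it:

```text
# Ask Moz's crawler to skip any URL with an "order=" filter parameter.
User-agent: rogerbot
Disallow: /*?order=
```

This only hides the pages from the crawler; if duplicate content is also a concern for search engines, a rel canonical on the filtered pages pointing at the unfiltered listing is the more complete fix.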
Crawl Diagnostics finding pages that don't exist. Will rel canonical help?
I have recently set up a campaign for www.completeoffice.co.uk. I'm the in-house developer there. When the crawl diagnostics completed, I went to check the results, and to my surprise, it had well over 100 missing or empty title tags. I then clicked through to see which pages, and nearly all the pages it says have missing or empty title tags DO NOT EXIST. This has really confused me and I need help figuring out how to solve it. Can anyone help? Attached is a screenshot of some of the links it showed me in crawl diagnostics; nearly all of these do not exist. Will the rel canonical tag in the head section of the actual pages help? For example, the actual page that exists is: www.completeoffice.co.uk/Products.php Whereas, when crawled, it actually showed www.completeoffice.co.uk/Products/Products.php Will having the rel canonical tag in the head of the real Products.php solve this?
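One common cause of phantom paths like this is a relative link resolved against the wrong base URL. A small sketch of how a crawler resolves relative hrefs; the trailing-slash base below is a hypothetical misconfiguration (e.g. a stray `<base>` tag or a link to a directory-style URL), not something confirmed from the site:

```python
from urllib.parse import urljoin

# The same relative href "Products.php" resolves differently
# depending on the base URL it is found under.
base_ok = "https://www.completeoffice.co.uk/Products.php"
base_bad = "https://www.completeoffice.co.uk/Products/"

print(urljoin(base_ok, "Products.php"))
# → https://www.completeoffice.co.uk/Products.php
print(urljoin(base_bad, "Products.php"))
# → https://www.completeoffice.co.uk/Products/Products.php  (the phantom page)
```

A rel canonical on the real Products.php would consolidate signals, but finding and fixing whatever creates the bad base (or switching to root-relative hrefs like /Products.php) removes the phantom URLs at the source.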
Moz Pro | CompleteOffice -
How do you get Mozbot to crawl your website?
I'm trying to get Mozbot to crawl my site so I can get new crawl diagnostics info. Does anyone know how this can be done?
Moz Pro | Romancing -
Crawl Diagnostics bringing 20k+ errors as duplicate content due to session IDs
Signed up to the trial version of SEOmoz today just to check it out, as I have decided I'm going to do my own SEO rather than outsource it (been let down a few times!). So far I like the look of things and have a feeling I am going to learn a lot and get results. However, I have just stumbled on something. After SEOmoz does its crawl diagnostics run on the site (www.deviltronics.com), it is showing 20,000+ errors. From what I can see, almost 99% of these are being picked up as duplicate content errors due to session IDs, so I am not sure what to do! I have done a "site:www.deviltronics.com" on Google and this certainly doesn't pick up the session IDs/duplicate content. So could this just be an issue with the SEOmoz bot? If so, how can I get SEOmoz to ignore these on the crawl? Can I get my developer to add some code somewhere? Help will be much appreciated. Asif
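The cleanest fix is server-side (stop appending session IDs to URLs for visitors without cookies), but the de-duplication itself is easy to see: the flagged URLs differ only by a session parameter. A minimal sketch, assuming common session parameter names - the names below are conventions, not taken from the site itself:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Query parameters that identify a visitor session rather than content.
# These names are common conventions, not confirmed from deviltronics.com.
SESSION_PARAMS = {"sid", "sessionid", "phpsessid"}

def canonicalize(url: str) -> str:
    """Return the URL with session-style query parameters removed."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k.lower() not in SESSION_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

print(canonicalize("http://www.deviltronics.com/product?PHPSESSID=abc123&cat=5"))
# → http://www.deviltronics.com/product?cat=5
```

A rel canonical tag pointing at the session-free URL addresses the duplication for every crawler, not just SEOmoz's, and is usually something a developer can add in one template.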
Moz Pro | blagger