Crawl Diagnostic | Starter crawl has taken 14 hrs so far
-
We started a starter crawl 14 hours ago and it's still going. Can anyone explain why it's taking so long when the interface says '2 hrs'?
Thanks,
Rory
-
Hi Rory. Most of our help desk is on holiday today, since it's the Fourth of July in the States. We do have a record of your ticket, along with one other report of a slow starter crawl, and a help desk specialist is looking into this now. Sorry for the delay.
Keri
-
I've asked — not heard back yet, so I think I'll wait to hear.
Thanks for your help, appreciate it.
-
Send an email to help (at) seomoz.org and someone will have a look.
-
It's a fairly big site, but it does say:
'To get you started quickly Roger is crawling up to 250 pages on your site. You should see these results within two hours. The full crawl will complete within 7 days.'
There's no option to do anything else — cancel, reset, etc. It just says 'Starter crawl in progress'. It's been 16 hours now, which is a bit frustrating, as I needed to send this through to a client this morning. Is anyone from SEOmoz around to look into this?
-
And here is how you reset the crawl:
1. On your webserver, edit the robots.txt file.
2. Block the SEOmoz bot from crawling the site by denying it access to the root.
You can do so by adding the following lines:
User-agent: rogerbot
Disallow: /
This would end the crawl session.
But before you do this, it may be a good idea to check whether your site really does have a lot of content and outgoing links.
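If you want to sanity-check the robots.txt rule before deploying it, here is a minimal sketch using Python's standard-library robots.txt parser. The `example.com` URL is just a placeholder; substitute your own domain.

```python
# Sketch: verify that the proposed robots.txt rule blocks rogerbot
# while leaving other crawlers unaffected.
from urllib.robotparser import RobotFileParser

rules = """User-agent: rogerbot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# rogerbot should be denied everywhere under the root...
print(parser.can_fetch("rogerbot", "https://example.com/"))   # False
# ...while a crawler with no matching group is still allowed.
print(parser.can_fetch("Googlebot", "https://example.com/"))  # True
```

Once the edited robots.txt is live on your server, rogerbot should pick it up on its next request and end the crawl session.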
-
Rory,
What is the sub-domain that you are crawling? It may just be that there is a lot of content to crawl.
-
How would I reset the crawl? I don't appear to have an option to.
-
Rory,
I would guess that this crawl session has hung up, so it would be a good idea to start a new one. The session could have stalled mid-crawl due to a server-side issue on your website or a temporary drop in the connection between the crawler and your site's server.