Crawl Test - Taking too long
-
The last crawl test I invoked seems to have been in progress for over 24 hours; the one before that completed in a few hours. I wish there were a progress indicator or an option to cancel.
The crawl (from Tools > Crawl Test) should not take this long. Any ideas or suggestions?
Also, the keyword research tool (plus a few others) has been down ever since I signed up. Is this normal?
-
Sorry for the long wait, I had to tell Roger that he was a bad boy/girl/thing? We have resolved the crawler issues, and your reports should be updating and reporting as they should. I am very sorry that Roger caused some concern; it said it won't do it next time.
Thank you very much for your patience!
-
Thanks, Peter, for the response. The keyword tool has started to work, but the Crawl Test is still pending; it has been over 3 days now. I need it stopped/cancelled so I can re-run it. Not being able to crawl and check the changes that were made on the site brings me to a stopping point.
Looking through some of the other Q&A threads, one suggestion was to stop the crawl via robots.txt, which didn't work. I don't think it is even doing anything; it's just stuck in the "in progress" state.
-Mo
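For anyone trying the same workaround: blocking a crawl via robots.txt generally means disallowing Moz's crawler, rogerbot, at the site root. A minimal sketch (rogerbot is Moz's crawler user-agent; whether an already-running crawl honors a mid-crawl robots.txt change is not guaranteed):

```
User-agent: rogerbot
Disallow: /
```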
-
Hey MomoMasta,
Thanks for reaching out to us! I'm sorry for the inordinate amount of time it took for the crawl to complete (which it should have done by now). I have made a note of the issue so our team can take a look at it. I like that feature request (the ability to cancel a crawl); you should definitely suggest it in the feature request forum at https://seomoz.zendesk.com/forums/293194-seomoz-pro-feature-requests.
I am very sorry the keyword feature was down due to the recent change in Google's search results. Our company has learned a lot from this outage and will have contingency plans in place the next time Google releases an update. I want to thank you for your patience and hope you will continue being our friend/customer: http://screencast.com/t/XaxBDAakziS
~Peter
Related Questions
-
What to do with a site of >50,000 pages vs. crawl limit?
What happens if you have a site in your Moz Pro campaign that has more than 50,000 pages? Would it be better to choose a sub-folder of the site to get a thorough look at that sub-folder?

I have a few different large government websites that I'm tracking to see how they are faring in rankings and SEO. They are not my own websites; I want to see how these agencies are doing compared to what the public searches for on technical topics and social issues that the agencies manage. I'm an academic looking at science communication. I am in the process of re-setting up my campaigns to get better data than I have been getting. I am a newbie to SEO, and the campaigns I slapped together a few months ago need to be set up better: all on the same day, making sure I've set them to include www or not for what ranks, refining my keywords, etc. I am stumped on what to do about the agency websites being really huge, and what the options are to get good data in light of the 50,000-page crawl limit.

Here is an example of what I mean. To see how the EPA is doing in searches related to air quality, ideally I'd track all of the EPA's web presence. www.epa.gov has 560,000 pages; if I put www.epa.gov in a campaign, what happens with the site having so many more pages than the 50,000-page crawl limit? What do I miss out on? Can I "trust" what I get? www.epa.gov/air has only 1,450 pages, so if I choose this for what I track in a campaign, the crawl will cover that sub-folder completely, and I get a complete picture of this air-focused sub-folder. But (1) I'll miss out on air-related pages in other sub-folders of www.epa.gov, and (2) it seems like I have so much of the 50,000-page crawl limit that I'm not using and could be using. (However, maybe that's not quite true: I'd also be tracking other sites as competitors, e.g. non-profits that advocate on air quality and industry air-quality sites, and maybe those competitors count towards the 50,000-page crawl limit and would get me up to the limit? How do the competitors you choose figure into the crawl limit?)

Any opinions on what I should do in general in this kind of situation? The small sub-folder vs. the full humongous site, or is there some other way to go here that I'm not thinking of?
Moz Pro | scienceisrad
Site Crawl Error
In the Moz crawl report this message appears under MOST COMMON ISSUES: "Search Engine Blocked by robots.txt (Error Code 612: Error response for robots.txt)." I asked the help staff, but they crawled again and nothing changed. There is only a robots.XML (not TXT) in the root of my web page; it contains:
User-agent: *
Allow: /
Allow: /sitemap.htm
Can anyone please help me? Thank you.
Moz Pro | nopsts
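One way to sanity-check the rules quoted in the question above is Python's standard-library robots parser. A sketch, assuming the file's contents are exactly as quoted (note that crawlers only look for a file named robots.txt at the root; a robots.xml will simply not be found, which is consistent with a 612 "error response" rather than a block):

```python
from urllib.robotparser import RobotFileParser

# The rules the poster currently serves (an Allow-only file).
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Allow: /",
    "Allow: /sitemap.htm",
])

# An all-Allow ruleset blocks nothing, so a 612 error points at the
# file's name/location or the server's response, not its contents.
print(rp.can_fetch("rogerbot", "http://example.com/"))  # True
```

So the likely fix is renaming the file to robots.txt; the rules themselves do not block any crawler.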
How does Moz decide a page title is a duplicate?
Suppose I have added a suffix and prefix to each of my product titles (e.g., two titles like "Buy online t-shirt at abc.com" and "Buy online poster at abc.com", where "Buy online" and "at abc.com" are the prefix and suffix). Will Moz treat these two page titles as duplicates?
Moz Pro | vayush
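For what it's worth, duplicate-title checks are generally exact-string comparisons, so a shared prefix/suffix alone should not trigger them. A minimal sketch of that logic (the first two titles are the poster's hypothetical examples; the third is added as a true duplicate for contrast):

```python
from collections import Counter

titles = [
    "Buy Online T-Shirt at abc.com",
    "Buy Online Poster at abc.com",
    "Buy Online Poster at abc.com",  # a true duplicate, for contrast
]

# Exact-match duplicate detection: shared prefixes/suffixes alone do
# not make two titles duplicates -- the whole string must match.
counts = Counter(titles)
duplicates = [t for t, n in counts.items() if n > 1]
print(duplicates)  # ['Buy Online Poster at abc.com']
```

Under this model the t-shirt and poster titles are distinct, though very similar titles can still be a usability concern.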
Question #3) My last question has to do with some SEOmoz crawl diagnostics
I recently fixed a problem (well, I'm asking to make sure this was the right thing to do in my first question, posted a few minutes ago) where all of my internal main sidebar category pages were linking using https://, which to my knowledge means SECURE pages. Anyway, OSE and Google seem not to be recognizing the link juice, and my rank fell for one of my main keywords by 2 positions about a week after I made the fix to have the pages be indexable. Making my pages properly linked can't be a bad thing, right? That's what I said. So I looked deeper, and my crawl diagnostics reports showed a MASSIVE reduction in warnings (about 3,000 301 redirects were removed by changing https:// to http://, because all the secure pages were redirecting to the regular http:// structure) and an INCREASE in Duplicate Page Titles and Temporary Redirects. Could that have been the reason the rank dropped? I think I am going to fix all the Duplicate Page Title problems tonight, but still, I am a little confused as to why such a major fix didn't help and appeared to hurt me. I feel like it hurt the rank not because of what I did, but because what I did caused a few extra redirects and opened the doors for the search engine to discover more pages that had problems (which could have triggered an algorithm that says: hey, these people have too many duplicate problems). Any thoughts will be GREATLY appreciated, thumbed, thanked, and marked as best answers! Thanks in advance for your time, Tyler A.
Moz Pro | TylerAbernethy
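If the goal in the question above is to keep everything on the http:// versions, one common approach (a sketch, assuming an Apache server with mod_rewrite; adapt for your stack) is a single site-wide 301 rule rather than thousands of per-page redirects:

```
RewriteEngine On
RewriteCond %{HTTPS} =on
RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1 [R=301,L]
```

Note that this mirrors the http-preferred setup described in the question; current best practice is the reverse direction, redirecting http to https.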
Crawl Diagnostics
Hello, I would appreciate your help with the following issue. During the crawl of e-maximos.com (a WP installation) I get a lot of errors in the categories below: "Title Missing or Empty" and "Missing Meta Description Tag", for URLs like http://e-maximos.com/?like_it=xxxx (e.g., xxxx=1033). Any idea of the reason and a possible solution? Thank you in advance, George
Moz Pro | gpapatheodorou
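Those ?like_it= URLs look like a WordPress like/vote plugin generating crawlable parameterized links that have no titles or meta descriptions of their own. One common mitigation, sketched here as an assumption about the setup, is to exclude the parameter in robots.txt (wildcard patterns like this are honored by major crawlers such as Googlebot; check your crawler's documentation):

```
User-agent: *
Disallow: /*?like_it=
```

Adding a canonical tag pointing parameterized URLs back to the base page is another common fix.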
How Would You Plan Long Term SEO (1 year and more)?
Hi, I'm new to SEO and learning fast. My friends and I have set up a string of websites, each for a different product to sell online. To start with, we have finalized keywords, optimized the on-page factors using the on-page analysis tool, and are now about to start working on link building.

For the first month we have planned around 20 bookmarkings; 3 articles each to 15 article directories; 30 directory submissions (with priority to niche-based ones); a business page listing on Hotfrog; 3 articles each submitted to 10 web 2.0 properties; 1 press release to 15 press release sites; some Q&A links and a few blog comments; 10 video submissions; and 1 guest blog post. We have also finished setting up a Facebook fan page and Twitter account and are active on them too. For anchor text diversity we will use keywords only in links from article submissions and web 2.0 properties; for links from other methods, the anchor text will be either the website name or the website URL. We are targeting 4 keywords per website (2 for the home page and 2 for 2 sub-pages). The difficulty of the keywords ranges from 40% to 60%.

Now, I have a few questions which I believe the experts over here can help with. 1. We have planned the above link building for the first month, but how do we build links from different websites in the coming months? 2. For web 2.0 properties, can we keep adding articles to the same blogs we have created, or do we need to create a separate set of web 2.0 properties each month? 3. Are we missing any link building methods or strategies? If so, can you please tell me?

I know some of the questions might be silly, but being a beginner, it would be a great help to get answers from this community. Thanks, Sridhar
Moz Pro | chosenindian
SEOmoz Dashboard Report: Crawl Diagnostic Summary
Hi there, I'm noticing that the total errors for our website have been going up and down drastically almost every other week. Four weeks ago there were over 10,000 errors; two weeks ago there were barely 1,000; today it's back to over 12,000. It says the majority of the errors are duplicate page content and duplicate page titles. We haven't made any changes to the titles or the content. Some insight into and explanation of this would be much appreciated. Thanks, Gemma
Moz Pro | RBA
SEOmoz Crawl CSV in Excel: already split by semicolon. Is this Excel's fault or SEOmoz's?
If, for example, a page title contains an ë, the .csv created by the SEOmoz Crawl Test is already split into columns at that point, even though I haven't used Excel's Text to Columns yet. When I try the latter, Excel warns me that I'm overwriting non-empty cells, which of course I would rather not do, since that would make me lose valuable data. My question is: is this caused by opening the .csv in Excel, or does it happen earlier in the process, when the .csv is created?
Moz Pro | DeptAgency
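Breakage at a non-ASCII character like ë is usually Excel guessing the wrong encoding when it opens the file, not the export itself. A sketch of the usual fix, assuming the export is UTF-8: re-encode it with a byte-order mark (the utf-8-sig codec) so Excel detects UTF-8 instead of mangling the multi-byte characters:

```python
import csv
import io

# A row as it might appear in a crawl export, with a non-ASCII title.
rows = [["http://example.com/page", "Zoë page title", "200"]]

# utf-8-sig prepends a byte-order mark, which tells Excel to decode
# the file as UTF-8 rather than guessing a single-byte codepage.
buf = io.StringIO()
csv.writer(buf).writerows(rows)
data = buf.getvalue().encode("utf-8-sig")

# Round-trip: decoding with utf-8-sig strips the BOM; the row is intact.
back = list(csv.reader(io.StringIO(data.decode("utf-8-sig"))))
print(back[0][1])  # Zoë page title
```

Alternatively, importing via Excel's Data > From Text dialog and explicitly choosing UTF-8 and the comma delimiter avoids the guessing entirely.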