2nd Crawl taking too long?
-
Hi,
I've added a campaign to my account, with the first crawl taking around a week. The 2nd crawl started 3 days, 17 hours ago and is still running.
Is this something that others have experienced?
The campaign is tracking 5 keywords, and the site has 17 pages.
Steve
-
There's no way to manually start your normal crawl, but you can run a crawl test of up to 3,000 URLs at any time by going to http://pro.seomoz.org/tools/crawl-test.
If you're experiencing a delay, feel free to contact the help team at http://www.seomoz.org/help.
-
Is there any way to manually cancel the crawl and restart it? I have the same problem, and I think perhaps the issue was with my server at the moment the bot passed.
-
Thanks Marcus.
-
I have seen crawls take a while for larger sites, but never that long for a smaller site. I'm sure it's nothing to worry about, so maybe give it another day or so; if there's still no progress and you've had no support on here, email help@seomoz.org.
To be honest, the people on here are pretty cool, so I would be surprised if no one has helped you out within the next couple of hours.
Related Questions
-
Moz crawl is stopped?
Has Moz stopped indexing links due to some updates? Can someone confirm? Thanks.
Moz Pro | 42409300125323700
-
FollowerWonk: How long have I been following someone?
Is there a way within Followerwonk to find out how long you have been following someone, or indeed how long they have been following you? I have downloaded the Excel document from the "sort your followers" tab, which has sections called "follows @name" and "@name follows", but these just give a last-checked date (if they do follow/are being followed) rather than the metric I want!
Moz Pro | BBI_Brandboost
-
SEO on-demand crawl
What happened to the on-demand crawl you could do in Pro when they switched to the new Moz site?
Moz Pro | Vertz-Marketing
-
Our Duplicate Content Crawled by SEOMoz Roger, but Not in Google Webmaster Tools
Hi guys, we're new here and I couldn't find the answer to my question. Here it goes: we had SEOMoz's Roger crawl all of our pages, and he came up with quite a few errors (duplicate content, duplicate page titles, long URLs). Per our CTO, and using Google Webmaster Tools, we told Google not to index those duplicate-content pages. The long-URL errors are redirected to SEF URLs. What we would like to know is whether Roger can tell that we have instructed Google not to index these pages. My concerns:
1. Should we still be concerned if Roger is still crawling those pages while the errors are not showing up in our Webmaster Tools?
2. Is there a way we can let Roger know, so they don't come up as errors in our SEOMoz tools?
Thanks so much, e
Moz Pro | RichSteel
-
When will the 250-page crawl limit be eliminated?
Hi, I signed up yesterday for an SEOMoz Pro account and would like to know when the 250-page crawl limit will be eliminated. 🙂 Thanks in advance for your help!
Moz Pro | Andarilho
-
Is there any way to manually initiate a crawl through SEOMoz?
... or do you actually have to wait a week for the next scheduled crawl date on a particular campaign? We've just made a ton of changes to our site, and it would be helpful to know if they will generate any warnings or errors sooner rather than later. Thanks!
Moz Pro | jadeinteractive
-
How to handle crawl diagnostic errors for the same URL: /products & /products/
I have copied one of the errors out of the crawl diagnostics report. Both /products and /products/ are returning an error, and both have pretty good domain authority, so I feel like it's hurting my site that these show up this way. Both URLs serve the same page; should I just set up a 301 on the /products with no slash, or will that cause more harm? I am using the MODx CMS, and that could have something to do with it.
Products | Datalight http://www.datalight.com/products | 1 | 37 | 5
Products | Datalight http://www.datalight.com/products/ | 1 | 30 | 1
Moz Pro | tjsherrill
-
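One common approach to the trailing-slash question above is to pick a single canonical form and 301 the other. The sketch below is illustrative Python, not MODx configuration (assumptions: in practice this would be a server rewrite rule or CMS plugin, and the `canonicalize` helper is hypothetical):

```python
# Illustrative sketch (not MODx-specific): canonicalize trailing slashes
# by answering a 301 that points duplicate URLs at one preferred form.

def canonicalize(path: str, prefer_slash: bool = False):
    """Return (status, location): (301, canonical_path) if a redirect
    is needed, or (200, path) if the URL is already canonical."""
    if path == "/":                      # the root is a special case
        return 200, path
    has_slash = path.endswith("/")
    if has_slash and not prefer_slash:
        return 301, path.rstrip("/")
    if not has_slash and prefer_slash:
        return 301, path + "/"
    return 200, path

# The duplicate pair from the question collapses to one canonical URL:
print(canonicalize("/products/"))   # -> (301, '/products')
print(canonicalize("/products"))    # -> (200, '/products')
```

Whichever form you canonicalize, the point is that crawlers then see one URL and the 301 consolidates the duplicate, rather than two competing pages.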
Initial Crawl Questions
Hello. I just joined and used the crawl tool. I have many questions and am hoping the community can offer some guidance.
1. I received an Excel file with 3k+ records. Is there a friendly online viewer for the crawl report, or is the Excel file the only output?
2. Assuming the Excel file is the only output: the Time Crawled field is a number (e.g. 1305798581). I have tried changing the field to a date/time format, but that did not work. How can I view the field as a normal date/time, such as May 15, 2011 14:02?
3. I use the ™ symbol in my title. This symbol appears in the output as a few ASCII characters. Is that a concern? Should I remove the trademark symbol from my title?
4. I am using XenForo forum software. All forum threads automatically receive a title tag and meta description as part of a template. The crawl test report shows my title tag and meta description as blank for many threads. I have looked at the source code of several pages and they all have clean title tags, and I don't understand why the crawl report doesn't show them. Any ideas?
5. In some cases the HTTP Status Code field shows a result of "3". What does that mean?
6. For every URL in the crawl report there is an entry in the Referrer field. What exactly is the relationship between these fields? I thought the crawl tool would inspect every page on the site. If a page doesn't have a referring page, is it missed? What if a page has multiple referring pages? How is that information displayed?
7. Under Google Webmaster Tools > Site Configuration > Settings > Parameter Handling, I have the options set to either "Ignore" or "Let Google decide" for various URL parameters. These are "pages" of my site which should mostly be ignored. For example, a forum may have 7 headers, each of which can be sorted in ascending or descending order. The only page that matters is the initial page; all the rest should be ignored by Google and the crawl. Presently there are 11 records for many pages which really should have only one record, due to these various sort parameters. Can I configure the crawl so it ignores parameter pages?
I am anxious to get started on my site. I dove into the crawl results, and it's just too messy in its present state for me to pull out any actionable data. Any guidance would be appreciated.
Moz Pro | RyanKent
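Two of the questions above have quick self-serve answers: the Time Crawled number looks like a Unix epoch timestamp (seconds since 1970-01-01 UTC; that interpretation is my assumption, not documented in the thread), and the mangled ™ is classic UTF-8-read-as-Windows-1252 mojibake, meaning the underlying title is fine. A short Python sketch:

```python
from datetime import datetime, timezone

# Q2: treat "Time Crawled" as a Unix timestamp, i.e. seconds since
# 1970-01-01 UTC. Converting 1305798581 gives a plausible 2011 date.
ts = 1305798581
crawled = datetime.fromtimestamp(ts, tz=timezone.utc)
print(crawled.strftime("%B %d, %Y %H:%M"))  # May 19, 2011 09:49 (UTC)

# Q3: the trademark symbol. '™' stored as UTF-8 but displayed as
# Windows-1252 becomes the three characters 'â„¢'; the data itself is
# not corrupted, only the viewer's encoding choice is wrong.
mangled = "™".encode("utf-8").decode("cp1252")
print(mangled)  # â„¢
```

In Excel, the equivalent conversion is `=A1/86400 + DATE(1970,1,1)` with the cell formatted as a date/time (the result is in UTC).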