Crawl Test - Taking too long
-
The last crawl test I started has been in progress for over 24 hours; the one before it completed in a few hours. I wish there were a progress indicator or an option to cancel.
The crawl (from Tool > Crawl Test) should not take this long. Any ideas or suggestions?
Also, the keyword research tool (plus a few others) has been down ever since I signed up. Is this normal?
-
Sorry for the long wait; I had to tell Roger that he was a bad boy/girl/thing? We have resolved the crawler issues, and your reports should be updating and reporting as they should. I am very sorry that Roger caused some concern; it said it won't do it again.
Thank you very much for your patience!
-
Thanks, Peter, for the response. The keyword tool has started to work, but the Crawl Test is still pending; it has been over 3 days. I need it stopped/cancelled so I can re-run it. Not being able to crawl and check the changes that were made on the site brings me to a standstill.
Looking through some of the other Q&A threads, one of the suggestions was to stop the crawl via robots.txt (sketched below), which didn't work. I don't think it is even doing anything; it just seems stuck in the "in progress" state.
-Mo
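For context, a minimal sketch of the robots.txt rule the other Q&A threads typically suggest for blocking Moz's crawler site-wide; whether a crawl already stuck "in progress" honors it is exactly what didn't work here:

    # block Moz's crawler (rogerbot) from the entire site
    User-agent: rogerbot
    Disallow: /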
-
Hey MomoMasta,
Thanks for reaching out to us! I'm sorry for the inordinate amount of time it took for the crawl to complete (which it should be by now). I have made a note of the issue so our team can take a look at it. I like that feature request (the ability to cancel); you should definitely suggest it in the feature request forum at https://seomoz.zendesk.com/forums/293194-seomoz-pro-feature-requests.
I am very sorry that the keyword feature was down due to the recent Google search result change. Our company has learned a lot from this outage and will have contingency plans in place the next time Google releases another update. I want to thank you for your patience and hope you will continue being our friend/customer: http://screencast.com/t/XaxBDAakziS
~Peter
Related Questions
-
FollowerWonk: How long have I been following someone?
Is there a way within Followerwonk to find out how long you have been following someone, or indeed how long they have been following you? I have downloaded the Excel document from the "sort your followers" tab, which has sections called "follows @name" and "@name follows", but these just give a last-checked date (if they do follow/are being followed) rather than the metric I want!
Moz Pro | BBI_Brandboost
How to force SeoMoz to re-crawl my website?
Hi, I have made a lot of changes to my website to comply with SeoMoz advice; now I would like to see whether I get better feedback from the tool. How can I force it to re-crawl a specific campaign? (Waiting another week is too long :-))
Moz Pro | oumma
Crawl Errors from URL Parameter
Hello, I am having this issue within SEOmoz's Crawl Diagnostics report. There are a lot of crawl errors on pages associated with /login. I see site.com/login?r=http://.... and have several duplicate content issues associated with those URLs. Seeing this, I checked WMT to see whether the Google crawler was showing the error as well; it wasn't. So what I ended up doing was going to robots.txt and disallowing rogerbot. It looks like this:
User-agent: rogerbot
Disallow:/login
However, SEOmoz has crawled again and is still picking up those URLs. Any ideas on how to fix this? Thanks! (A cleaner version of the rule is sketched after this question.)
Moz Pro | WrightIMC
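For reference, a minimal sketch of how that rule is usually laid out (one directive per line, with a space after the colon); since Disallow matches by path prefix, /login should also cover the parameterized /login?r=... URLs, assuming rogerbot applies standard robots.txt matching:

    # keep Moz's crawler out of all /login URLs, including /login?r=...
    User-agent: rogerbot
    Disallow: /login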
Crawl Diagnostics Warnings - Duplicate Content
Hi all, I am getting a lot of warnings about duplicate page content. The pages are normally 'tag' pages; I have some news stories or blog posts tagged with multiple 'tags'. Should I ask Google not to index the tag pages (the usual noindex tag is sketched after this question)? Does it really affect my site? Thanks
Moz Pro | skehoe
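If the decision is to keep tag pages out of the index, the usual mechanism (sketched here as an assumption, not something confirmed in this thread) is a robots meta tag in the head of each tag page:

    <!-- keep the tag page out of the index but still crawl its links -->
    <meta name="robots" content="noindex, follow" />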
Slowing down SEOmoz Crawl Rate
Is there a way to slow down the SEOmoz crawl rate? My site is pretty huge and I'm getting 10k pages crawled every week, which is great. However, I sometimes get multiple page requests in one second, which slows down my site a bit. If this feature exists I couldn't find it; if it doesn't, it would be a great one to have, similar to how Googlebot handles it (one commonly cited option is sketched after this question). Thanks.
Moz Pro | corwin
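Crawl-delay is a non-standard robots.txt directive that many crawlers reportedly honor, rogerbot among them; treating that support as an assumption rather than a confirmed fact, a minimal sketch would be:

    # ask the crawler to pause roughly 10 seconds between requests
    User-agent: rogerbot
    Crawl-delay: 10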
Crawl reports URLs with duplicate content, but it's not the case
Hi guys!
A few hours ago I received my crawl report. I noticed several records of URLs flagged for duplicate content, so I went and opened those URLs one by one.
Not one of those URLs really had duplicate content, but I have a concern: the website is a product showcase, and many articles are just images with an href behind them. Many of those articles use the same images, so maybe that is why the SEOmoz crawler raises the duplicate content flag. I wonder whether Google has a problem with that too. See for yourself how it looks: http://by.vg/NJ97y
http://by.vg/BQypE Those two URLs are flagged as duplicates under "URLs with Duplicate Page Content (up to 5)". Please mind the language (Greek) and try to focus on the URLs and content. PS: my example is simplified just for the purpose of my question.
Moz Pro | MakMour
Crawl Diagnostics Report
I'm a bit concerned about the results I'm getting from the Crawl Diagnostics report. I've updated the site with canonical URLs to remove duplicate content, and when I check the site it all displays the right values, but the report, which has just finished crawling, is still showing a lot of pages as duplicate content. Simple example: http://www.domain.com and http://www.domain.com/ are both in the duplicate content section, although both have a canonical URL set (an illustrative tag is sketched after this question). Does each crawl check the entire site from the beginning, or just the pages it didn't have a chance to crawl last time? This is just one of 333 duplicate content pages, all of which have a canonical URL pointing to the right page. Can someone please explain?
Moz Pro | coremediadesign
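For illustration only, since the tag markup from the original post did not survive formatting: a rel=canonical element for the example above would normally sit in the head of both URL variants, with domain.com standing in as the question's placeholder:

    <!-- points both http://www.domain.com and http://www.domain.com/ at one preferred URL -->
    <link rel="canonical" href="http://www.domain.com/" />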
How to Stop SEOMOZ from Crawling a Sub-domain without redoing the whole campaign?
I am using SEOMOZ for a client to track their website's performance and fix any errors and issues. A few weeks ago, they set up a sub-domain (sub.example.com) as a niche website for some of their specialized content. However, when SEOMOZ re-crawled the main domain (example.com), it also reported the errors for the subdomain. Is there any way to stop SEOMOZ from crawling the subdomain and only crawl the main domain (one robots.txt-based approach is sketched after this question)? I know that can be done by starting a new campaign, but is there any way to work around it within an existing campaign? I'm asking because we would like to avoid setting up the campaign again and losing the historical data. Any input would be greatly appreciated. Thanks!
Moz Pro | TheNorthernOffice79
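One approach that comes up elsewhere in this thread is blocking rogerbot at the subdomain level; whether the existing campaign respects it without reconfiguration is an assumption, not something confirmed here. A minimal sketch of a robots.txt served at sub.example.com/robots.txt (sub.example.com being the question's placeholder):

    # served from the subdomain root only; the main domain keeps its own robots.txt
    User-agent: rogerbot
    Disallow: /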