Crawl Errors and Notices drop to zero
-
Hi all,
After setting up a campaign in Moz, the crawl completed successfully and showed Errors and Warnings in Crawl Diagnostics (about 40-50 of each), but after a few days the numbers dropped to zero. Only the Notices seem to stay normal, with a slight drop since the campaign was set up, but not falling to zero. I set the same campaign up in a colleague's account and the same thing happened shortly after setup. I didn't find any existing Q&A on this, so any insight is appreciated!
-
Glad I could help!
-
Thank you for looking into this. Really appreciate it!
-
Hey Vanessa,
Every URL in the report you forwarded is on the blog, so the noindex tag on the blog does look like the reason for the drop in crawl errors and warnings. If you'd prefer that we begin crawling the blog again, you can have the tag removed, but keep in mind that the tag also means the search engines are no longer indexing those pages or encountering those errors either.
Let me know if you have any other questions.
Chiaryn
-
Thanks for looking into this, Chiaryn. That is the correct campaign. I have a report from April 17, which I've sent to help@seomoz.org. If you can shed any light on this, that would be a big help. I appreciate it!
-
Hey Vanessa,
Thanks for writing in.
I looked into your account and I think you are referring to the Sparky campaign. Unfortunately, I can only see the most recent crawl data, so I don't have a way to compare crawls from before April 24th to see why the number of errors and warnings would have dropped around that time.
I do see that we picked up a noindex, nofollow tag on the blog pages on April 16th, so it may be that we were crawling blog pages that had errors and warnings before the tag was added. Once the noindex, nofollow tag was in place, we could no longer crawl those pages and report back on their errors.
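To make the mechanism above concrete: a noindex, nofollow directive usually appears as a `<meta name="robots">` tag in the page's `<head>`, and a crawler that honors it will skip indexing the page and following its links, which is why errors on those pages stop being reported. Here is a minimal sketch, using only the Python standard library, of how a crawler might detect such a tag; the class and function names are illustrative, not part of any Moz tool.

```python
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Collects the directives from any <meta name="robots"> tag on a page."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attr_map = dict(attrs)
            if attr_map.get("name", "").lower() == "robots":
                # content is a comma-separated list, e.g. "noindex, nofollow"
                self.directives += [
                    d.strip().lower()
                    for d in attr_map.get("content", "").split(",")
                ]


def is_blocked(html):
    """Return True if the page asks crawlers not to index it or follow its links."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return "noindex" in parser.directives or "nofollow" in parser.directives


page = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
print(is_blocked(page))  # True: a crawler honoring the tag would skip this page
```

A crawler that checks `is_blocked` before processing a page would stop reporting errors for the blog as soon as the tag was added, matching the drop described above.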
If you can think of any other changes that may have taken place around April or if you have an old report that shows some of the URLs that were reported as having errors, I can look into this further for you. If you prefer not to include the error report on this public forum, you can always email it to help@seomoz.org and include my name in the subject line.
I hope this helps.
Chiaryn