Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content - many posts will remain viewable - we have locked both new posts and new replies.
520 Error from crawl report with Cloudflare
-
I am getting a lot of 520 Server Errors in crawl reports. I see this is related to Cloudflare. We know 520 is a Cloudflare code, so maybe the Moz team can change the label from "unknown" to "Cloudflare 520". Perhaps the Moz team can also update the "how to fix" section in the reporting, if they have suggestions on how to avoid seeing these in the report, or if there is a real issue that needs to be addressed. At this point I don't know.
There must be something Moz can suggest, like a setting in Cloudflare that will let Rogerbot through if Cloudflare is blocking it because it doesn't like its behavior.
It could also be that Rogerbot crawled my site on a bad day, or at a time when we were deploying a massive site change. If I know when my site will be down, can I pause Rogerbot?
I found this https://developers.cloudflare.com/support/troubleshooting/general-troubleshooting/troubleshooting-crawl-errors/
-
A 520 error is an HTTP status code indicating that Cloudflare received an empty, invalid, or unexpected response from the origin server, or could not get a usable response at all. This can happen for a variety of reasons, including:
Server downtime: The origin server might be down or undergoing maintenance.
Firewall restrictions: The origin server might have a firewall that is blocking requests from Cloudflare.
DNS issues: There might be a DNS misconfiguration that is preventing Cloudflare from resolving the origin server's IP address (a quick resolution check is sketched after this list).
SSL issues: There might be an issue with the SSL certificate on the origin server.
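One quick way to sanity-check the DNS point is to resolve the hostname and confirm the answers look right: for a Cloudflare-proxied site, public DNS should return Cloudflare edge IPs, while the record inside the Cloudflare dashboard should point at the real origin. A minimal Python sketch (example.com is a placeholder):

```python
import socket

HOSTNAME = "example.com"  # placeholder; use your own hostname

# Collect every address the public resolver returns for port 443.
# For a Cloudflare-proxied site these should be Cloudflare edge IPs,
# not the origin server's own IP.
addresses = {info[4][0] for info in socket.getaddrinfo(HOSTNAME, 443)}
for address in sorted(addresses):
    print(address)
```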
To troubleshoot the issue, you can try the following (a quick way to run the first two checks is sketched after this list):
Check if the origin server is up and running.
Check if the origin server has a firewall that is blocking requests from Cloudflare.
Check if the DNS is configured correctly.
Check if the SSL certificate is valid and configured correctly.
If none of these steps resolve the issue, you can reach out to Cloudflare support for further assistance.
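For the first two checks, it can help to compare a request that goes through Cloudflare with one sent straight to the origin. A minimal sketch in Python, assuming you know the origin's IP address; the hostname and IP below are placeholders:

```python
import requests

HOSTNAME = "www.example.com"   # placeholder: your site's hostname
ORIGIN_IP = "203.0.113.10"     # placeholder: your origin server's IP

# Request through Cloudflare (normal DNS resolution).
via_cf = requests.get(f"https://{HOSTNAME}/", timeout=30)
print("via Cloudflare:", via_cf.status_code)

# Request the origin directly, sending the Host header so the web
# server picks the right virtual host. verify=False only because the
# certificate will not match a bare IP; this is a diagnostic, not a fix.
direct = requests.get(
    f"https://{ORIGIN_IP}/",
    headers={"Host": HOSTNAME},
    timeout=30,
    verify=False,
)
print("direct to origin:", direct.status_code)
```

If the direct request succeeds while the proxied one fails with a 520, the problem most likely sits between Cloudflare and the origin (a firewall rule, a timeout); if both fail, the origin itself is the place to look.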
-
@awilliams_kingston To answer your question, there is no option to pause Rogerbot manually. However, Rogerbot only crawls a website when a Site Crawl campaign is active and scheduled to run. If you want to pause Rogerbot, you can stop the active campaign or schedule the next crawl to start at a later time.
To schedule a Site Crawl, go to your Moz Pro account, click on "Site Crawl" in the left-hand navigation menu, and select "Add Campaign" to set up a new campaign or select an existing one. From there, you can customize your crawl settings, including the crawl frequency and start time.
If you have a scheduled maintenance window and want to prevent Rogerbot from crawling your site during that time, you can adjust the crawl frequency to avoid overlapping with your maintenance schedule. You can also use a robots.txt file to block the crawler from accessing specific pages or sections of your site.
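If you do go the robots.txt route, Moz documents its crawler's user-agent token as rogerbot, so a minimal sketch might look like this (the /staging/ path is only an illustration, and the Crawl-delay line assumes the crawler honors that directive):

```
User-agent: rogerbot
Disallow: /staging/
Crawl-delay: 10
```

Keep in mind that robots.txt controls where the bot may go, not when: it won't pause a crawl during a maintenance window, and disallowing everything would simply make the scheduled crawl come back empty.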
-
@awilliams_kingston The 520 server error you're seeing in your Moz crawl reports is related to Cloudflare. It's a generic error, which means it could be caused by a variety of issues, including server overload or misconfigured settings.
To address this, you could check your Cloudflare firewall settings and see if there are any rules that are blocking the Moz Rogerbot crawler. If there are, try adding an exception for the Rogerbot user agent to allow it to crawl your site without being blocked.
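In Cloudflare terms, that exception is typically a custom rule whose expression matches the crawler's user agent, with an action that skips the checks that would otherwise block it. A hedged sketch of the expression (the exact rule type, action name, and placement depend on your plan and dashboard version):

```
(http.user_agent contains "rogerbot")
```

Matching on the user agent alone is spoofable, so treat this as a targeted exception rather than a broad allowlist, and keep it ordered before any rule that challenges or blocks unknown bots.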
If you know your site will be down for maintenance or undergoing significant changes, you could pause the Moz crawler during that time to prevent it from generating false 520 errors in your reports.
Finally, you could check out the troubleshooting guide in the Cloudflare documentation for more information on identifying and addressing crawl errors. Remember to work with both Moz and Cloudflare support teams to find a solution that works for your specific setup.
-
@Kateparish Thank you.
How do you pause Rogerbot? I can't find anything on that in my admin panel, but maybe that's because there is no crawl happening at the moment and my next crawl is scheduled to happen in a few days. Also, is there a way to schedule a pause if a crawl is happening? If I know I have site maintenance on a certain day of the week at a specific time, for example, can I have Rogerbot take a break?
-
A 520 error typically indicates a connection error between Cloudflare and the origin server. This error occurs when the server returns an empty or invalid response to Cloudflare, or when the server takes too long to respond.
To troubleshoot a 520 error from a crawl report with Cloudflare, you can take the following steps:
Check the server logs: The first step in troubleshooting a 520 error is to check the server logs for any error messages. Look for any errors related to the server's network or connectivity, such as DNS resolution issues, network timeouts, or firewall restrictions (a small log-scan sketch follows this list).
Check Cloudflare logs: Cloudflare logs can provide additional insights into the cause of the error. Check the Cloudflare logs for any error messages or connection issues between Cloudflare and the origin server.
Temporarily disable Cloudflare: Temporarily disabling Cloudflare can help you determine if the error is caused by Cloudflare or the origin server. If the error disappears when Cloudflare is disabled, then the issue is likely with Cloudflare.
Contact Cloudflare support: If you are unable to resolve the issue on your own, you can contact Cloudflare support for assistance. Provide them with the server logs and Cloudflare logs, as well as any other relevant information, to help them diagnose the issue.
By following these steps, you should be able to identify and resolve the 520 error from the crawl report with Cloudflare.
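As a concrete starting point for the first step, a quick scan for upstream errors around the time of the crawl can help. A sketch assuming an nginx origin and its default error-log location; both the path and the matched phrases are assumptions to adjust for your stack:

```python
LOG_PATH = "/var/log/nginx/error.log"  # assumed nginx default; adjust for your server

# Phrases that often accompany empty or invalid upstream responses
# (assumptions; other stacks log different wording).
PATTERNS = (
    "upstream prematurely closed connection",
    "upstream timed out",
    "connect() failed",
)

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        if any(phrase in line for phrase in PATTERNS):
            print(line.rstrip())
```

Any hits that line up with Rogerbot's crawl window are a strong hint the 520s are real origin-side problems rather than crawler quirks.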
Related Questions
-
How to get rid of bot verification errors
Technical SEO | Sep 9, 2021, 10:12 PM | mfrgolfgti
I have a client who sells highly technical products and has lots and lots (a couple of hundred) of PDF datasheets that can be downloaded from their website. But in order to download a datasheet, a user has to register on the site. Once they are registered, they can download whatever they want (I know this isn't a good idea, but this wasn't set up by us and is historical). On doing a Moz crawl of the site, it came up with a couple of hundred 401 errors. When I investigated, they are all pages where there is a button to click through to get one of these downloads. The Moz error report calls the error "Bot verification". My questions are:
Are these really errors?
If so, what can I do to fix them?
If not, can I just tell Moz to ignore them, or will this cause bigger problems?
-
How to allow bots to crawl all but WP-content
Technical SEO | Oct 17, 2018, 5:19 PM | Tom3_15
Hello, I would like my website to remain crawlable to bots, but to block my wp content and media. Does the following robots.txt work? I worry that the * user agent may conflict with the others.
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/

User-agent: GoogleBot
Allow: /

User-agent: GoogleBot-Mobile
Allow: /

User-agent: GoogleBot-Image
Allow: /

User-agent: Bingbot
Allow: /

User-agent: Slurp
Allow: /
-
Will Google crawl and rank our ReactJS website content?
Technical SEO | May 24, 2016, 8:33 AM | Jane.com
We have 250+ products dynamically inserted and sorted on our site daily (more specifically our homepage... yes, it's a long page). Our dev team would like to explore rendering the page server-side using ReactJS. We currently use a CDN to cache all the content, which of course we would like to continue using. SO... will Google be able to crawl that content? We've read some articles with different ideas (including prerendering):
http://andrewhfarmer.com/react-seo/
http://www.seoskeptic.com/json-ld-big-day-at-google/
If we were to only load the schema important to the page (like product title, image, price, description, etc.) from the server and then let the client render the remaining content (comments, suggested products, etc.), would that go against best practices? It seems like that might be seen as showing the googlebot 1 version and showing the site visitor a different (more complete) version.
-
Crawl rate dropped to zero
Technical SEO | Apr 2, 2016, 8:41 AM | pok3rplay3r
Hello, I recently moved my site on GoDaddy from cPanel to managed WordPress. I bought this transfer directly from GoDaddy customer service. In the process they accidentally changed my domain from www to non-www. I changed it back after the migration, but as a result the site's crawl rate in Search Console fell to zero and has not risen at all since then. Beyond this, the website does not display any other errors; I can ask Google to manually fetch my pages and it works as before; only the crawl rate seems to have dropped permanently. GoDaddy customer service also claims they do not see any errors, but I think they caused this during the migration when the URL changed, since the timing matches perfectly. Also, when they accidentally removed the www, the crawl rate of my site's non-www version went up, but fell back to zero when I changed it back to the www version. Now the crawl rate of both the www and non-www versions is zero. How do I get it to rise again? Customer service also said the problem may be related to FTP data in Search Console, but they were not able to help any more than that. Would someone here be able to help me with this in any way, please?
-
Yoast SEO: 404 error pages after setup
Technical SEO | Feb 14, 2016, 3:53 PM | A_Fotografy
Hello all, Something strange happened with my blog site. I recently signed up to Moz tools. Initially everything was fine, but during my last crawl I got loads of 404 pages. A few days ago I was tweaking some settings in the SEO plugin according to this post: https://moz.com/blog/setup-wordpress-for-seo-success What I noticed was that the 404 pages were coming from my blog posts, but for some reason the category was missing in those posts. For example, this link is a 404: https://a-fotografy.co.uk/inchcolm-island-wedding-photography-bailie The one with the category is https://a-fotografy.co.uk/wedding-pictures/inchcolm-island-wedding-photography-bailie/ So basically for some reason the category was missing. Please let me know how I can fix this instead of doing hundreds of redirects now. Thank you,
Regards,
Armands
-
Schema Markup Errors - Priority or Not?
Technical SEO | Jul 9, 2015, 11:58 AM | AfroSEO
Greetings All... I've been digging through the search console on a few of my sites and I've been noticing quite a few structured data errors. Most of the errors are related to: hcard, hentry and hatom. Most of them are missing author & entry-title, while the other one is missing: fn. I recently saw an article on SEL about Google's focus on spammy mark-up. The sites I use are built and managed by vendors, so I would have to impress upon them the impact of these errors and have them prioritize, then fix them. My question is whether or not this should be prioritized? Should I have them correct these errors sooner than later or can I take a phased approach? I haven't noticed any loss in traffic or anything like that, I'm more focused on what negative impact a "phased approach" could have. Any thoughts?
-
429 Errors?
Technical SEO | Nov 19, 2014, 1:08 PM | TheKrazyCouponLady
I have over 500,000 429 errors in webmaster tools. Do I need to be concerned about these errors?
-
How to stop Search Bot from crawling through a submit button
Technical SEO | Dec 12, 2011, 2:44 PM | NarenBansal
On our website http://www.thefutureminders.com/, we have three form fields that have three pull-downs for Month, Day, and Year. This is creating duplicate pages while indexing. How do we tell the search bot to index the page but not crawl through the submit button? Thanks, Naren