612 : Page banned by error response for robots.txt
-
Hi all,
I ran a crawl on my site https://www.drbillsukala.com.au and received the following error: "612 : Page banned by error response for robots.txt." Before anyone mentions it, yes, I have been through all the other threads, but they did not help me resolve this issue.
I am able to view my robots.txt file in a browser: https://www.drbillsukala.com.au/robots.txt
The permissions on the robots.txt file are set to 644, so it should be accessible.
My Google Search Console does not show any issues with my robots.txt file.
I am running my site through the StackPath CDN, but I'm not inclined to think that's the culprit.
One thing I did find odd: even though I entered my website with the https protocol (I double-checked), the Moz spreadsheet listed my site with http.
I'd welcome any feedback you might have. Thanks in advance for your help.
Kind regards
-
Hey there! Tawny from Moz's Help Team here.
After doing some quick searching, it looks like the way you configure WAF rules depends on which service is hosting the firewall. You may need to speak to their support team to ask how to configure things to allow our user-agents.
Sorry I can't be more help here! If you still have questions we can help with, feel free to reach out to us at help@moz.com and we'll do our best to assist you.
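While waiting on a firewall vendor's support team, one quick way to tell whether a WAF is filtering by user-agent is to request robots.txt with the Moz crawler user-agents and compare the status codes against a normal browser user-agent. This is only an illustrative sketch using Python's standard library (the URL is the site from the original post; swap in your own):

```python
import urllib.request

def fetch_status(url, user_agent):
    """Return the HTTP status for url when requested with the given
    User-Agent, or None if the request could not complete at all."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except OSError as err:
        # HTTPError (4xx/5xx) carries a .code; pure network failures do not.
        return getattr(err, "code", None)

if __name__ == "__main__":
    url = "https://www.drbillsukala.com.au/robots.txt"
    for ua in ("rogerbot", "dotbot", "Mozilla/5.0"):
        # A 403/406 for the bots but a 200 for the browser UA suggests
        # a user-agent rule in the CDN/WAF is doing the blocking.
        print(ua, "->", fetch_status(url, ua))
```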
-
Hi, I am having the same issue.
Can you please tell me how you created the rule in your Web Application Firewall to allow the user agents rogerbot and dotbot?
Thanks!!
-
Hi Federico,
Thanks for the prompt. Yes, this solution worked. I'm hopeful that this thread helps others too because when I was troubleshooting the problem, the other threads were not helpful for my particular situation.
Cheers
-
Hi, did the solution work?
-
Hi Federico,
I think I have found the solution for this problem and am hopeful the crawl will be successful this time around. Based on further digging and speaking to the team at StackPath CDN, I have done the following:
- I added the following to my robots.txt file:
User-agent: rogerbot
Disallow:

User-agent: dotbot
Disallow:
- I added a custom robots.txt file in my CDN which includes the above, and then created a rule in my Web Application Firewall which allows the user agents rogerbot and dotbot.
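For anyone checking their own file: Python's standard-library robots.txt parser can confirm that rules like the above actually permit the Moz crawlers (an empty `Disallow:` means "allow everything" for that user-agent). A small sanity check, using the rules from this post:

```python
import urllib.robotparser

# The rules added above: an empty "Disallow:" permits all paths
# for the named user-agent.
ROBOTS_TXT = """\
User-agent: rogerbot
Disallow:

User-agent: dotbot
Disallow:
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

for agent in ("rogerbot", "dotbot"):
    print(agent, "allowed:", parser.can_fetch(agent, "https://www.drbillsukala.com.au/"))
    # → rogerbot allowed: True / dotbot allowed: True
```

Note this only validates the robots.txt rules themselves; it won't catch a CDN or WAF block, which happens before the crawler ever reads the file.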
I'll let you know if the crawl was successful or not.
Kind regards
-
Thanks for your response Federico. I have checked my robots.txt tester in my Google Search Console and it said "allowed."
Oddly, it also happened on another site of mine that I'm also running through StackPath CDN with a web application firewall in place. This makes me wonder if perhaps the CDN/WAF are the culprits (?).
I'll keep poking around to see what I find.
Cheers
-
Seems like an issue with the Moz crawler, as the robots.txt has no issues and the site loads just fine.
If you already tested your robots.txt using the Google Webmaster Tools "robots.txt Tester" just to be sure, then you should contact Moz here: https://moz.com/help/contact/pro
Hope it helps.