Moz crawler is not able to crawl my website
-
Hi, I need help: Moz can't crawl my site. I'm also sharing a screenshot of the notice: "Moz was unable to crawl your site on Mar 26, 2022. Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster."
My robots.txt is also fine; I checked it.
Here is my website: https://whiskcreative.com.au
Please take a look as soon as possible. -
@jasontorney Hi, I had the same problem after moving several of my websites to a Virtual Private Server (VPS) with enhanced security features.
One of these features specifically blocked the Moz bot from crawling websites; the hosting engineers advised they had done this because the bot was particularly aggressive in nature.
In my VPS control panel I found the switch that disables bot blocking. I occasionally flip it when I'm grading a page with Moz, but the advice from hosting support was to otherwise leave it active to protect the websites from attacks (which means I don't get feedback from Moz crawls).
If you check with your hosting company you may find that they have a similar bot blocker configured for security purposes.
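Before blaming the host, it can help to rule out the robots.txt file itself. A minimal sketch below uses Python's standard library to test whether a robots.txt would block Moz's crawlers (rogerbot for Site Crawl, dotbot for link data). The rules shown are a hypothetical example, not the poster's real file; note that a server-level bot blocker like the one described above would never show up in robots.txt at all, so a clean result here still doesn't rule out host-side blocking.

```python
# Sketch: parse a robots.txt and check what Moz's user agents may fetch.
# The robots_txt content below is a made-up example; substitute the real file.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: rogerbot
Disallow: /private/

User-agent: *
Disallow:
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

for agent in ("rogerbot", "dotbot"):
    allowed = parser.can_fetch(agent, "https://whiskcreative.com.au/")
    print(f"{agent} may fetch the homepage: {allowed}")
```

If the rules allow the crawler but Moz still reports a robots.txt error, the failure is happening at the network or server level, which points back at the hosting company's bot blocker.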
Related Questions
-
Unsolved Moz is showing a canonical error that doesn't belong.
Hi guys, and thanks for this excellent source of information. I have an issue with the Moz system: it is telling me that I don't have canonical instructions, but I have canonical instructions on all my pages, so I'm confused; maybe I'm not understanding what the system is trying to show me. If you can help me I will be very grateful. Here you can see a page that has the canonical instruction: https://drive.google.com/file/d/14U_-Sgu_NQaB7kMBH3AguHQMHyHX9L8X/view?usp=sharing and here you can see what the Moz system is reporting: https://drive.google.com/file/d/1pqgSC-V9WOyBPvQEr06pbqpLf_w7-q8J/view?usp=sharing This is happening on 19 pages, and all 19 pages have the canonical instruction. Thanks in advance, guys.
On-Page Optimization | b-lab -
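One common cause of this mismatch is that the canonical tag is added by JavaScript after the page loads, while crawlers like Moz's read the raw server-rendered HTML. A minimal sketch below extracts `rel="canonical"` from raw HTML with Python's standard library, so you can check the same source the crawler sees; the page markup here is a hypothetical example, not one of the 19 pages in question.

```python
# Sketch: find the rel="canonical" link in raw (server-rendered) HTML,
# the same source a crawler reads, using only the standard library.
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical":
            self.canonical = a.get("href")

# Hypothetical page source; fetch your real page's HTML and feed it instead.
html = '<html><head><link rel="canonical" href="https://example.com/page"></head></html>'
finder = CanonicalFinder()
finder.feed(html)
print(finder.canonical)
```

If this finds no canonical in the raw HTML of the flagged pages but you can see one in the browser's element inspector, the tag is being injected client-side and the crawler report is technically accurate.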
Unsolved MOZ Crawler Stalled
The MOZ crawl has been waiting since April 2nd. What's the deal? I am not able to "recrawl", so I have been stuck in limbo.
Product Support | WebMarkets -
Unsolved Why isn't there an update yet?
-
Noindex and Crawl Budget
Hello, if we noindex pages, will it improve crawl budget? For example, pages like these:
https://x-z.com/2012/10/
https://x-y.com/2012/06/
https://x-y.com/2013/03/
https://x-y.com/2019/10/
https://x-y.com/2019/08/
Should we delete/redirect such pages? Thanks
Technical SEO | Johnroger -
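Worth noting on the question above: a noindex tag by itself does not save crawl budget, because Google must still fetch the page to see the tag. To stop these date-based archives from being crawled at all, robots.txt is the relevant tool. A hypothetical sketch, assuming the archives all live under year directories like those listed:

```
# Hypothetical robots.txt fragment: stop crawling of date-archive paths.
# Assumes the archives share year-directory URLs like /2012/, /2013/, /2019/.
User-agent: *
Disallow: /2012/
Disallow: /2013/
Disallow: /2019/
```

Be aware that pages blocked in robots.txt can remain in the index if they are already there, since the crawler can no longer see a noindex tag on them; deleting or redirecting genuinely valueless pages is often the cleaner fix.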
A crawl revealed two home pages
After doing a site crawl using the Moz tool, I have found two home pages: www.domain.com/ and www.domain.com. Both URLs have exactly the same metrics, and I have set a preferred domain name in Google. Will this hurt SEO? Should I claim www.domain.com/ as well as www.domain.com and domain.com in Search Console? Thanks
Technical SEO | Tom3_15 -
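On the trailing-slash pair above: for the site root specifically, an empty path and "/" identify the same resource, and clients normalize the empty path to "/" before fetching (RFC 3986, section 6.2.3), which is why both variants show identical metrics. A minimal sketch of that normalization with Python's standard library, using the question's placeholder domain:

```python
# Sketch: normalize an empty root path to "/" the way HTTP clients do,
# showing that www.domain.com and www.domain.com/ are one resource.
from urllib.parse import urlsplit, urlunsplit

def normalize_root(url):
    """Replace an empty path with '/' (RFC 3986 path normalization)."""
    parts = urlsplit(url)
    path = parts.path or "/"
    return urlunsplit((parts.scheme, parts.netloc, path, parts.query, parts.fragment))

print(normalize_root("https://www.domain.com"))   # https://www.domain.com/
print(normalize_root("https://www.domain.com/"))  # https://www.domain.com/
```

So the root pair is harmless; trailing-slash duplication only matters on non-root paths like /page and /page/, which can genuinely be different URLs.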
Google not crawling the website since 22nd October
Hi, this is Suresh. I made changes to my website, and I see that Google has been unable to crawl it since 22nd October. It also shows no content when I use cache:www.vonexpy.com. Can anybody help me understand why Google is unable to crawl my website? Is there a technical issue with the site? The website is www.vonexpy.com. Thanks in advance.
Technical SEO | sureshchowdary -
Sitemaps for international websites
Hey Mozzers,

Here is the case I would appreciate your reply on: I will build a sitemap for a .com domain which has multiple domains for other countries (like Italy, Germany, etc.). Can I put the hreflang annotations in sitemap 1 only, and have a sitemap 2 with all URLs for the EN/default version of the .com website, then put both sitemaps in a sitemap index? The issue is that there are pages that go away quickly (in 1-2 days); they are localised, but I prefer not to give annotations for them, as I want to keep clean lang annotations in sitemap 1. That way, I can replace only sitemap 2 and keep sitemap 1 intact. Would it work, or had I better put everything in one sitemap?

The second question is whether you recommend doing the same exercise for all subdomains and other domains. I have read a lot on the topic, but I am not sure whether it is worth the effort.

The third question: if I have www.example.it and it.example.com, should I include both in my sitemap with hreflang annotations (the sitemap on www.example.com), using it for the subdomain and it-it for the .it domain (to specify language, and language + country)?

Thanks a lot for your time and have a great day,
Ani
Technical SEO | SBTech -
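For the third question above, a minimal sketch of a sitemap entry carrying hreflang annotations, using the hypothetical example.com hosts from the question. Each URL listed in the sitemap must repeat the full set of alternates, including a self-referencing one, so the it and it-it variants both appear alongside the en default:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://www.example.com/page</loc>
    <xhtml:link rel="alternate" hreflang="en" href="https://www.example.com/page"/>
    <xhtml:link rel="alternate" hreflang="it" href="https://it.example.com/page"/>
    <xhtml:link rel="alternate" hreflang="it-it" href="https://www.example.it/page"/>
  </url>
</urlset>
```

Alternates may point across hosts and domains, so a single sitemap on www.example.com can annotate both the subdomain and the .it domain, provided each of those properties is verified in Search Console.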
Removing indexed website
I had a .in TLD version of my .com website live for about 15 days, which was a duplicate copy of the .com website. I did not wish to use the .in further, for SEO duplication reasons, and let the .in domain expire on 26th April. But even now, when I search for my website, the .in version also shows up in results, and even in Google Webmaster Tools it shows as the website with the maximum number (190) of links to my .com website. I am sure this is hurting the ranking of my .com website. How can the .in website be removed from Google's index and search results, given that it has also expired? Thanks
Technical SEO | geekwik