Moz crawler is not able to crawl my website
-
Hello All,
I'm facing an issue with the Moz crawler. Every time it crawls my website, I get an error message saying: "**Moz was unable to crawl your site on Sep 13, 2017.** Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster."
We changed the robots.txt file and checked it, but the issue is still not resolved.
URL: https://www.khadination.shop/robots.txt
Do let me know what went wrong and what needs to be done.
Any suggestion is appreciated.
Thank you.
-
Hi there! Tawny from Moz's Help Team here!
I think I can help you figure out what's going on with your robots.txt file. First things first: we're not starting at the robots.txt URL you list. Our crawler always starts from your Campaign URL, and it can't start at an HTTPS URL, so it begins at the HTTP version and crawls from there. That means the robots.txt file we're having trouble accessing is khadination.shop/robots.txt.
I ran a couple of tests, and it looks like this robots.txt file might be inaccessible from AWS (Amazon Web Services). When I tried to curl your robots.txt file from AWS I got a 302 temporary redirect (https://www.screencast.com/t/jy4MkDZQNbQ), and when I ran it through hurl.it, which also runs on AWS, it returned an internal server error (https://www.screencast.com/t/mawknIyaMn).
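If you'd like to reproduce that check yourself, here's a minimal sketch, assuming Python 3 with the third-party `requests` library (the user-agent string is just for illustration). It requests the HTTP robots.txt without following redirects, so a 302 or a server error shows up directly in the status code:

```python
# Minimal sketch: fetch the robots.txt the crawl actually starts from and
# report the raw status code instead of silently following redirects.
import requests

url = "http://khadination.shop/robots.txt"  # the crawl starts from the HTTP version

response = requests.get(
    url,
    allow_redirects=False,               # surface a 302 rather than following it
    headers={"User-Agent": "rogerbot"},  # illustrative user-agent, not necessarily what Moz sends
    timeout=10,
)

print(response.status_code)              # 200 means the file is directly reachable
print(response.headers.get("Location"))  # set when the server answers with a redirect
```

If that prints a 302 with a Location header, or a 5xx code, something in front of the file (the server itself, a firewall, or a CDN rule) is responding before the robots.txt is ever served.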
One more thing: it looks like you have a wildcard user-agent ( * ) as the first line in this robots.txt file. Best practice is to put all of your specific user-agent groups and their disallow rules before the wildcard user-agent; otherwise some crawlers may stop reading your robots.txt file once they hit the wildcard line, since they'll assume those rules apply to them.
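To illustrate the layout, here's a minimal sketch using Python 3's standard-library robots.txt parser. The rogerbot group and the Disallow paths below are placeholders rather than your real rules; the point is simply that the specific user-agent group comes before the wildcard, and each crawler ends up matching the group intended for it:

```python
# Sketch of the suggested layout: the specific user-agent group first, then the
# wildcard group. Agent names and Disallow paths are placeholders only.
from urllib import robotparser

ROBOTS_TXT = """\
User-agent: rogerbot
Disallow: /private/

User-agent: *
Disallow: /tmp/
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# rogerbot matches its own group; any other crawler falls back to the wildcard group.
print(parser.can_fetch("rogerbot", "http://khadination.shop/private/page"))  # False
print(parser.can_fetch("otherbot", "http://khadination.shop/tmp/page"))      # False
print(parser.can_fetch("otherbot", "http://khadination.shop/private/page"))  # True
```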
I think if you fix up those things, we should be able to access your robots.txt and crawl your site!
If you still have questions or run into more trouble, shoot us a note at help@moz.com and we'll do everything we can to help you get things sorted out.