Why is my site not being crawled?
-
The error in my dashboard:
**Moz was unable to crawl your site on Jul 23, 2020.** Our crawler was banned by a page on your site, either through your robots.txt, the X-Robots-Tag HTTP header, or the meta robots tag. Update these tags to allow your page and the rest of your site to be crawled. If this error is found on any page on your site, it prevents our crawler (and some search engines) from crawling the rest of your site. Typically errors like this should be investigated and fixed by the site webmaster.
I think I need to edit robots.txt.
How do I fix this?
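The error message lists three possible blockers: robots.txt, the X-Robots-Tag HTTP header, and the meta robots tag. As a rough illustration (not Moz's actual logic), a small Python sketch like this can scan a page's response header and HTML for blocking directives; the function name and the token set it checks are my own assumptions:

```python
import re

# Directive values commonly treated as blocking (an assumption for this sketch).
BLOCKING = {"noindex", "nofollow", "none"}

def robots_blocks(x_robots_tag, html, bot="rogerbot"):
    """List reasons a crawler like rogerbot might be refused by this page."""
    reasons = []
    # 1) The X-Robots-Tag HTTP response header, e.g. "noindex, nofollow".
    if x_robots_tag:
        tokens = {t.strip().lower() for t in x_robots_tag.split(",")}
        if tokens & BLOCKING:
            reasons.append("X-Robots-Tag header: " + x_robots_tag)
    # 2) <meta name="robots"> tags, plus bot-specific ones like <meta name="rogerbot">.
    pattern = r'<meta\s+name=["\']([^"\']+)["\']\s+content=["\']([^"\']+)["\']'
    for name, content in re.findall(pattern, html, flags=re.IGNORECASE):
        if name.lower() in ("robots", bot.lower()):
            tokens = {t.strip().lower() for t in content.split(",")}
            if tokens & BLOCKING:
                reasons.append('meta tag: <meta name="%s" content="%s">' % (name, content))
    return reasons
```

Running it against the headers and HTML of your homepage (fetched however you like) would point at the header or tag the error message is complaining about.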
-
Hey - Thanks for posting.
One thing to note here is that Moz doesn't read sitemaps. We do check robots.txt files for directives, but then crawl starting at the campaign seed URL, and work our way down in depth.
If your site is still not being crawled, I would suggest reaching out to help@moz.com with the campaign in question so we can take a look.
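For what it's worth, you can check locally how a given robots.txt is interpreted for rogerbot using Python's standard-library `urllib.robotparser` (a generic parser, not Moz's crawler; the example.com URL is a placeholder):

```python
from urllib.robotparser import RobotFileParser

# The directive discussed in this thread: an empty Disallow value,
# which is an allow-all rule for the named user-agent.
robots_txt = [
    "User-agent: rogerbot",
    "Disallow:",
]

parser = RobotFileParser()
parser.parse(robots_txt)

# An empty Disallow permits everything, so rogerbot may fetch any path.
print(parser.can_fetch("rogerbot", "https://example.com/some-page"))  # True
```

If this prints False for a URL on your site with your real robots.txt lines, the file itself is the problem; if it prints True, look at the X-Robots-Tag header or meta robots tags instead.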
thanks!
-
Well, do you have pages with noindex or nofollow directives? If so, check and verify them. Second: you can leave your robots.txt file pretty much empty except for a link to your sitemap, e.g.:
Sitemap: https://www.somesite.xyz/sitemap.xml
There's really no need to block bots unless specific pages need to be blocked, and putting that in your robots.txt won't stop bad crawlers anyway.
-
I added the code below to my robots.txt:
User-agent: rogerbot
Disallow:
but it didn't fix the error in my dashboard.
-
You could just put the sitemap location in your robots.txt and call it a day. To be honest, blocking robots isn't really a good thing unless you really have to.
-
thanks
With the code below?
User-agent: rogerbot
Disallow:
-
You're blocking a crawler from accessing your page. Clear it.
Related Questions
-
Server blocking crawl bot due to DOS protection and MOZ Help team not responding
First of all, has anyone else not received a response from the help team? I've sent 4 emails; the oldest is a month old. One of our most-used features, Moz's on-demand crawl to find broken links, doesn't work, and it's really frustrating to get no response when we're paying so much a month for a feature that doesn't work. OK, rant over; now onto the actual issue. On our crawls we're just getting 429 errors because our server has DOS protection and is blocking Moz's robot. I'm sure it will be as easy as whitelisting the robot's IP, but I can't get a response from Moz with the IP. Cheers, Fergus
Feature Requests | JamesDavison0
Can Moz add an alert to email us when a competitor's site gets a new backlink?
This would be a very useful feature, and other sites are doing this, including Ahrefs.
Feature Requests | rabbit5190
Moz crawler is not able to crawl my website
Hello All, I'm facing an issue with the Moz crawler. Every time it crawls my website, there is an error message saying "**Moz was unable to crawl your site on Sep 13, 2017.** Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster." We changed the robots.txt file and checked it, but the issue is still not resolved. URL: https://www.khadination.shop/robots.txt Do let me know what went wrong and what needs to be done. Any suggestions are appreciated. Thank you.
Feature Requests | Harini.M0
MOZ Site Crawl - Ignore functionality question
Quick question about the ignore feature found in the MOZ Site Crawl. We've made some changes to pages containing errors found by the MOZ Site Crawl. These changes should have resolved issues but we're not sure about the "Ignore" feature and do not want to use it without first understanding what will happen when using it. Will it clear the item from the current list until the next Site Crawl takes place. If Roger finds the issue again, it will relist the error? Will it clear the item from the list permanently, regardless if it has not been properly corrected?
Feature Requests | StickyLife1
Crawl test limitaton - ways to take advantage of large sites?
Hello, I have a large site (120,000+ pages) and the crawl test is limited to 3,000 pages. I want to know if there is a way to make the most of a crawl on a site of this size. Can I use a regular expression, for example? Thanks!
Feature Requests | CamiRojasE0
Does Moz Pro provide a back link suggestion area for a particular site?
Is there a tool with Moz Pro that offers link suggestions for specific sites?
Feature Requests | DenisZilberberg0
Crawl diagnostic errors due to query string
I'm seeing a large amount of duplicate page titles, duplicate content, missing meta descriptions, etc. in my Crawl Diagnostics Report due to URLs' query strings. These pages already have canonical tags, but I know canonical tags aren't considered in MOZ's crawl diagnostic reports and therefore won't reduce the number of reported errors. Is there any way to configure MOZ to not consider query string variants as unique URLs? It's difficult to find a legitimate error among hundreds of these non-errors.
Feature Requests | jmorehouse0