Moz Q&A is closed.
After more than 13 years and tens of thousands of questions, Moz Q&A closed on 12th December 2024. Whilst we're not completely removing the content (many posts will still be viewable), we have locked both new posts and new replies.
Moz & Xenu Link Sleuth unable to crawl a website (403 error)
-
It could be that I am missing something really obvious, but we are getting the following error when we try to use the Moz tool on a client website. (I have read through a few posts on 403 errors, but none appear to describe the same problem as this.)
Moz Result
Title: 403 : Error
Meta Description: 403 Forbidden
Meta Robots: Not present/empty
Meta Refresh: Not present/empty
Xenu Link Sleuth Result
Broken links, ordered by link:
error code: 403 (forbidden request), linked from page(s):
Thanks in advance!
-
Hey Liam,
Thanks for following up. Unfortunately, we use thousands of dynamic IPs through Amazon Web Services to run our crawler and the IP would change from crawl to crawl. We don't even have a set range for the IPs we use through AWS.
As for throttling, we don't have a set throttle. We try to space out the server hits enough to not bring down the server, but then hit the server as often as necessary in order to crawl the full site or crawl limit in a reasonable amount of time. We try to find a balance between hitting the site too hard and having extremely long crawl times. If the devs are worried about how often we hit the server, they can add a crawl delay of 10 to the robots.txt to throttle the crawler. We will respect that delay.
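The crawl-delay fix described above is a one-line addition to robots.txt. A minimal sketch (rogerbot is Moz's crawler user agent; the delay value is the one suggested above):

```
User-agent: rogerbot
Crawl-delay: 10
```

Crawl-delay is a non-standard directive and not every crawler honours it, but as noted above, rogerbot will respect it.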
If the devs use Moz, as well, they would also be getting a 403 on their crawl because the server is blocking our user agent specifically. The server would give the same status code regardless of who has set up the campaign.
I'm sorry this information isn't more specific. Please let me know if you need any other assistance.
Chiaryn
-
Hi Chiaryn
The saga continues... This is the response my client got back from the developers. Please could you let me have the answers to the two questions below?
Apparently, as part of their 'SAF' (?) protocols, if the IT director sees a big spike in 3rd-party products trawling the site, he will block them! They did say that they use Moz too. What they've asked me to get from Moz is:
- Moz IP address/range
- Level of throttling they will use
I would question why, if THEY USE MOZ themselves, they would need these answers, but if I go back with that I will be going around in circles. Any chance of letting me know the answer(s)?
Thanks in advance.
Liam
-
Awesome - thank you.
Kind Regards
Liam
-
Hey There,
The robots.txt shouldn't really affect 403s; you would actually get a "blocked by robots.txt" error if that was the cause. Your server is basically telling us that we are not authorized to access your site. I agree with Mat that we are most likely being blocked in the htaccess file. It may be that your server is flagging our crawler and Xenu's crawler as troll crawlers or something along those lines. I ran a test on your URL using a non-existent crawler, Rogerbot with a capital R, and got a 200 status code back but when I run the test with our real crawler, rogerbot with a lowercase r, I get the 403 error (http://screencast.com/t/Sv9cozvY2f01). This tells me that the server is specifically blocking our crawler, but not all crawlers in general.
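Chiaryn's test is easy to reproduce. A minimal sketch in Python (the local server below is a hypothetical stand-in for the client's real server, configured to mimic the case-sensitive "rogerbot" block described above; it is not Moz's actual test tooling):

```python
import http.server
import threading
import urllib.error
import urllib.request

class UAHandler(http.server.BaseHTTPRequestHandler):
    """Stand-in server that blocks the exact user agent "rogerbot"."""

    def do_GET(self):
        # Case-sensitive match, mirroring the behaviour observed in
        # the thread: lowercase "rogerbot" is forbidden, "Rogerbot" is not.
        if self.headers.get("User-Agent") == "rogerbot":
            self.send_response(403)
        else:
            self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

def status_for(url, user_agent):
    """Fetch url with the given User-Agent and return the HTTP status code."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code

server = http.server.HTTPServer(("127.0.0.1", 0), UAHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

print(status_for(url, "rogerbot"))  # 403: the blocked crawler
print(status_for(url, "Rogerbot"))  # 200: capital R gets through
```

Against a real site, the same comparison can be run with curl by swapping the `-A` user-agent flag; two different status codes for two user agents is strong evidence of a server-side user-agent block rather than a robots.txt issue.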
I hope this helps. Let me know if you have any other questions.
Chiaryn
Help Team Ninja
-
Hi Mat
Thanks for the reply - robots.txt file is as follows:
## The following are infinitely deep trees
User-agent: *
Disallow: /cgi-bin
Disallow: /cms/events
Disallow: /cms/latest
Disallow: /cms/cookieprivacy
Disallow: /cms/help
Disallow: /site/services/megamenu/
Disallow: /site/mobile/
I can't get access to the .htaccess file at present (we're not the developers). Anyone else have any thoughts? Weirdly, I can get Screaming Frog info back on the site :-/
-
403s are tricky to diagnose because they, by their very nature, don't tell you much. They're sort of the server equivalent of just shouting "NO!".
You say Moz & Xenu are receiving the 403. I assume that it loads properly from a browser.
I'd start looking at the .htaccess. Any odd deny statements in there? It could be that an IP range or user agent is blocked. Some people like to block common crawlers (not calling Roger names there). Check the robots.txt whilst you are there, although that shouldn't really return a 403.
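For illustration, a hypothetical Apache 2.4 .htaccess fragment that would produce exactly this symptom: a 403 for specific crawler user agents while browsers load the site normally. The agent names here are examples, not the client's actual configuration; note that SetEnvIf matches case-sensitively, which would explain the lowercase-vs-capital "rogerbot" difference seen elsewhere in the thread.

```
# Flag requests whose User-Agent matches a blocked crawler
SetEnvIf User-Agent "rogerbot" blocked_bot
SetEnvIf User-Agent "Xenu" blocked_bot

# Allow everyone except flagged bots; flagged requests receive a 403
<RequireAll>
    Require all granted
    Require not env blocked_bot
</RequireAll>
```

If the developers are using the older Apache 2.2 syntax, the equivalent block would use `Deny from env=blocked_bot`; either way, grepping the .htaccess for `SetEnvIf`, `BrowserMatch`, or `Deny` is a quick way to confirm.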