Crawl Diagnostics 403 on home page...
-
In Crawl Diagnostics it says oursite.com/ returns a 403. It doesn't say what's causing it, but it mentions there is no robots.txt. There is a robots.txt, and I see no problems with it. How can I find out more about this error?
-
Hi Dana,
Thanks for writing in. The robots.txt file would not cause a 403 error. That type of error is actually related to the way the server responds to our crawler. Basically, this means the server for the site is telling our crawler that we are not allowed to access the site. Here is a resource that explains the 403 HTTP status code pretty thoroughly: http://pcsupport.about.com/od/findbyerrormessage/a/403error.htm
I looked at both of the campaigns on your account and I am not seeing a 403 error for either site, though I do see a couple of 404 page not found errors on one of the campaigns, which is a different issue.
If you are still seeing the 403 error message on one of your crawls, you would just need to have the webmaster update the server to allow rogerbot to access the site.
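For example, if the block is coming from Apache .htaccess rules, one minimal sketch of letting rogerbot through while keeping an existing deny in place might look like this (Apache 2.2-style directives; the denied IP range and the environment-variable name here are placeholders, not something from your actual config):
SetEnvIfNoCase User-Agent "rogerbot" allow_rogerbot
Order Deny,Allow
Deny from 192.0.2.0/24
Allow from env=allow_rogerbot
With Order Deny,Allow, a request that matches both directives is allowed, so rogerbot gets through while the deny still applies to everyone else.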
I hope this helps. Please let me know if you have any other questions.
-Chiaryn
-
Okay, so I couldn't find this thread and started a new one. Sorry...
... The problem persists.
RECAP
I have two blocks in my .htaccess; both are for amazonaws.com.
I have gone over our server's blocked-request logs and see only Amazon addresses and bot names.
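For reference, the kind of .htaccess block I'm talking about looks roughly like this (a sketch from memory, not a paste of our actual rules):
Order Allow,Deny
Allow from all
Deny from .amazonaws.com
If the Moz crawler happens to connect from Amazon's network, a blanket deny like this would send it a 403 as well, which would explain the error.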
I did a Fetch as Google in our WM Tools, and fetch it did. Success!
Why isn't this crawler able to access the site? Many other bots are crawling right now.
Why can I use the SEOmoz On-Page feature to crawl a single page, but the automatic crawler won't access the site? I just took a break from typing this to try On-Page on our robots.txt, and it worked fine. I used the keyword "Disallow" and it gave me a C. =0)
... now if we could just crawl the rest of the site...
Any help on this would be greatly appreciated.
-
I think I do. I just (a few minutes ago) went through a 403 problem being reported by another site trying to access an HTML file for verification. Apparently they were connecting from an IP that's blocked by our .htaccess. I removed the blocks, told them to try again, and it worked with no problem. I see that SEOmoz has only crawled 1 page. Off to see if I can trigger a re-crawl now...
-
Hmmm... not sure why this is happening. Maybe add these lines to the top of your robots.txt and see if it's fixed by next week. It certainly won't hurt anything:
User-agent: *
Allow: /
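For what it's worth, Allow isn't part of the original robots.txt specification, but it's a widely supported extension, so the worst case is that a crawler simply ignores it.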
-
No problem. Looking at my Google WM Tools, crawl stats don't show any errors.
Thanks
User-Agent: *
Disallow: /*?zenid=
Disallow: /editors/
Disallow: /email/
Disallow: /googlecheckout/
Disallow: /includes/
Disallow: /js/
Disallow: /manuals/
-
OH, this is only in SEOmoz's Crawl Diagnostics that you're seeing this error. That explains why robots.txt could be affecting it. I misread this earlier and thought you were finding the 403 on your own in-browser.
Can you paste the robots.txt file in here so we can see it? I would imagine that has everything to do with it, now that I've correctly read your post. My apologies!
-
Apache
-
A 403 is a Forbidden status code, usually pertaining to security and permissions.
Are you running your server in an Apache or IIS environment? Robots.txt shouldn't affect a site's visibility to the public; it only talks to site crawlers.
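One quick way to see exactly what the server sends back to a crawler-style request is to fetch the home page with a crawler-like user agent (a rough sketch; the exact user-agent string rogerbot sends may differ):
curl -I -A "rogerbot" http://www.oursite.com/
If that returns a 403 while a plain curl -I of the same URL succeeds, the server is filtering on the user agent; if both succeed from your machine but the crawler still gets a 403, the block is more likely on the crawler's IP range.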