Error 403
-
I'm getting this message "We were unable to grade that page. We received a response code of 403. URL content not parseable" when using the On-Page Report Card. Does anyone know how to go about fixing this? I feel like I've tried everything.
-
I am getting 403 errors for this crazy URL:
How do I get rid of this error?
I am also getting 404 errors for pages that do not exist anymore. How do I get rid of those?
-
Great answers Mike!
Jessica, if you're still having issues with the Crawl Test and it seems like a tool issue, let us know at help@seomoz.org - you'll get a faster response from our Help Team for your tool questions that way (unless, of course, a mozzer like Mike beats us to it!)
-
I will check that out. Thank you so much!
-
Is there another folder on your server called resources? If so, that may be the problem. See this thread:
http://wordpress.org/support/topic/suddenly-getting-403-forbiden-error-on-one-page-only
I did run Xenu on your site and experienced the 403 error on that page only. There were other 404s that need to be fixed as well, FYI.
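If anyone wants to see roughly what a link checker like Xenu does under the hood, the first step is just extracting the links from a page; a real tool then requests each one and records any 403/404 responses. A minimal sketch using only Python's standard library (the class name is my own, not from any particular tool):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags, as a link checker would."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

parser = LinkExtractor()
parser.feed('<a href="/resources/">Resources</a> <a href="/about/">About</a>')
print(parser.links)  # ['/resources/', '/about/']
```

From there, a checker would fetch each collected link and flag anything that comes back 403 or 404.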
-
http://www.truckdriverschools.com/resources/
Thank you so much for your help!
-
Jessica,
Are the page(s) in question indexed by Google?
I would recommend trying another site crawl tool, such as Xenu's Link Sleuth or GSiteCrawler, and seeing if it is able to crawl the site without issue. It could also be something to do with your hosting company trying to prevent Denial of Service (DoS) attacks. If you want to send me the URL, I am happy to crawl it for you with one of these tools.
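One quick way to tell whether a host or security module is blocking by User-Agent (a common reason a page loads fine in a browser but 403s for a crawl tool) is to fetch the page twice with different User-Agent headers and compare the status codes. A hedged sketch using only Python's standard library; the function names are my own:

```python
from urllib import request, error

def status_for(url: str, user_agent: str) -> int:
    """Fetch url with the given User-Agent header and return the HTTP status."""
    req = request.Request(url, headers={"User-Agent": user_agent})
    try:
        with request.urlopen(req, timeout=10) as resp:
            return resp.status
    except error.HTTPError as e:
        return e.code  # 4xx/5xx responses arrive as HTTPError

def looks_user_agent_based(browser_status: int, crawler_status: int) -> bool:
    """200 for a browser-like UA but 403 for a crawler-like UA suggests UA filtering."""
    return browser_status == 200 and crawler_status == 403
```

For example, compare `status_for(url, "Mozilla/5.0")` against `status_for(url, "rogerbot")`; if only the crawler-style request gets a 403, the block is almost certainly based on the User-Agent rather than the page itself.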
-
It is actually WordPress. Everything looks fine when visiting the URL and inside the WordPress admin, but when I grade the SEO content it gives me the 403 error.
It happened after I added the SEO text to a page that had images within the same text box. Does that make a difference?
-
It seems like your website is blocking access to the file. A few questions:
1. Are you blocking robots from this URL in your robots.txt file?
2. Do you get the 403 error when you manually visit the page?
3. What CMS, if any, are you using? If it is Joomla, we've seen some strange things happen with some of the security modules when using crawl tools such as GSiteCrawler.
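Question 1 can be checked programmatically: Python's standard `urllib.robotparser` will tell you whether a given robots.txt blocks a particular user agent from a path. A minimal sketch, with a hypothetical helper name and sample rules of my own:

```python
from urllib.robotparser import RobotFileParser

def is_blocked(robots_txt: str, user_agent: str, path: str) -> bool:
    """Return True if the given robots.txt rules disallow user_agent from path."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return not rp.can_fetch(user_agent, path)

sample = """User-agent: *
Disallow: /resources/
"""
print(is_blocked(sample, "rogerbot", "/resources/"))  # True
print(is_blocked(sample, "rogerbot", "/about/"))      # False
```

If this returns True for the URL the grader is fetching, the 403 is self-inflicted and fixable by editing robots.txt; if it returns False, look at server or security-module rules instead.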