Error 403
-
I'm getting this message "We were unable to grade that page. We received a response code of 403. URL content not parseable" when using the On-Page Report Card. Does anyone know how to go about fixing this? I feel like I've tried everything.
-
I am getting 403 errors for this crazy URL:
How do I get rid of this error?
I am also getting 404 errors for pages that do not exist anymore. How do I get rid of those?
-
Great answers, Mike!
Jessica, if you're still having issues with the Crawl Test and it seems like a tool issue, let us know at help@seomoz.org - you'll get a faster response from our Help Team for your tool questions that way (unless, of course, a mozzer like Mike beats us to it!)
-
I will check that out. Thank you so much!
-
Is there another folder on your server called resources? If so, that may be the problem. See this thread:
http://wordpress.org/support/topic/suddenly-getting-403-forbiden-error-on-one-page-only
I did run Xenu on your site and experienced the 403 error on that page only. There were other 404s that need to be fixed as well, FYI.
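As for those 404s: the usual fix is a 301 redirect from each dead URL to its closest live replacement, so crawlers (and visitors following old links) stop hitting missing pages. If the site is on Apache, which is typical for WordPress hosts, a couple of lines in .htaccess do it. The paths below are made-up examples, not your real URLs:

```
# .htaccess at the site root -- permanently redirect retired pages
# so crawl tools stop logging 404s (example paths only):
Redirect 301 /old-resources-page/ /resources/
Redirect 301 /retired-article.html /blog/
```

A WordPress redirect plugin accomplishes the same thing if you'd rather not edit .htaccess by hand.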
-
http://www.truckdriverschools.com/resources/
Thank you so much for your help!
-
Jessica,
Are the page(s) in question indexed by Google?
I would recommend trying another site crawl tool like Xenu Link Sleuth or GSiteCrawler and seeing if it is able to crawl the site without issue. It could also be something to do with your hosting company trying to prevent Denial of Service (DoS) attacks. If you want to send me the URL, I am happy to crawl it for you with one of these tools.
-
It is actually WordPress. Everything looks fine when visiting the URL and inside WordPress, but when I grade the SEO content it gives me the 403 error.
It happened after I added the SEO text to a page that had images within the same text box. Does that make a difference?
-
It seems like your website is blocking access to the file. A few questions:
1. Are you blocking robots from this URL in your robots.txt file?
2. Do you get the 403 error when you manually visit the page?
3. What CMS, if any, are you using? If it is Joomla, we've seen some strange things happen with some of the security modules when using crawl tools such as GSiteCrawler.
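On question 1: a single Disallow line in robots.txt is enough to keep crawl tools away from a page even though it loads fine in your browser. Something like this, assuming the page lives under /resources/ (a hypothetical path for illustration):

```
# robots.txt at the site root -- keeps crawlers away from /resources/
# while the page still loads normally for human visitors:
User-agent: *
Disallow: /resources/
```

A robots.txt block wouldn't normally cause the server itself to send a 403, but some graders report any refused or blocked fetch with a similar error, so it's worth ruling out.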