Error 403
-
I'm getting this message "We were unable to grade that page. We received a response code of 403. URL content not parseable" when using the On-Page Report Card. Does anyone know how to go about fixing this? I feel like I've tried everything.
-
I am getting 403 errors for this crazy URL:
How do I get rid of this error?
I am also getting 404 errors for pages that do not exist anymore. How do I get rid of those?
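For 404s on pages that no longer exist, the usual fix is a 301 (permanent) redirect from each dead URL to the closest live replacement, so visitors land somewhere useful and crawlers stop reporting the error. A minimal Apache .htaccess sketch — the paths below are hypothetical, so substitute your own removed and replacement URLs:

```apache
# Hypothetical example: map each removed page to its closest live replacement.
# Place in the site's .htaccess (requires mod_alias).
Redirect 301 /old-article/ http://www.example.com/new-article/
```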
-
Great answers, Mike!
Jessica, if you're still having issues with the Crawl Test and it seems like a tool issue, let us know at help@seomoz.org - you'll get a faster response from our Help Team for your tool questions that way (unless, of course, a mozzer like Mike beats us to it!)
-
I will check that out. Thank you so much!
-
Is there another folder on your server called resources? If so, that may be the problem. See this thread...
http://wordpress.org/support/topic/suddenly-getting-403-forbiden-error-on-one-page-only
I did run Xenu on your site and experienced the 403 error on that page only. FYI, there were other 404s that need to be fixed as well.
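The WordPress conflict described in that thread is easy to verify: if a physical folder exists on the server at the same path as a WordPress permalink, the web server serves the folder instead of the page and can answer 403 (for example, when directory listings are disabled and the folder has no index file). A minimal sketch in Python — the document root below is hypothetical, so adjust both arguments to your own server:

```python
import os

def shadows_permalink(docroot: str, slug: str) -> bool:
    """True if a physical file or directory exists where WordPress
    expects to serve the permalink /<slug>/ -- a common cause of a
    403 on exactly one page."""
    return os.path.exists(os.path.join(docroot, slug))

# Hypothetical paths -- substitute your server's real document root.
if shadows_permalink("/var/www/html", "resources"):
    print("A real 'resources' folder is shadowing the WordPress page.")
else:
    print("No physical folder found; check .htaccess rules or security plugins instead.")
```

If the folder does exist, renaming either it or the permalink slug resolves the collision.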
-
http://www.truckdriverschools.com/resources/
Thank you so much for your help!
-
Jessica,
Is the page (or pages) in question indexed by Google?
I would recommend trying another site-crawl tool, such as Xenu Link Sleuth or GSiteCrawler, to see whether it can crawl the site without issue. It could also be something to do with your hosting company trying to prevent Denial of Service (DoS) attacks... If you want to send me the URL, I am happy to crawl it for you with one of these tools.
-
It is actually WordPress. Everything looks fine when visiting the URL and inside the WordPress admin, but when I grade the SEO content it gives me the 403 error.
It happened after I added the SEO text to a page that had images within the same text box. Does that make a difference?
-
It seems like your website is blocking access to the file. A few questions:
1. Are you blocking robots from this URL in your robots.txt file?
2. Do you get the 403 error when you manually visit the page?
3. What CMS, if any, are you using? If it is Joomla, we've seen some strange things happen with some of the security modules when using crawl tools such as GSiteCrawler.
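Point 1 can be checked programmatically with Python's standard-library robots.txt parser. The robots.txt contents below are hypothetical — substitute your site's real file, normally served at /robots.txt. Moz's crawler identifies itself as rogerbot, so a rule aimed at it (or at all agents) would explain a tool-only 403:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt contents -- replace with your site's real file.
robots_txt = """\
User-agent: rogerbot
Disallow: /resources/

User-agent: *
Disallow: /wp-admin/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Compare what Moz's crawler (rogerbot) and Googlebot are allowed to fetch.
for agent in ("rogerbot", "Googlebot"):
    verdict = "allowed" if parser.can_fetch(agent, "/resources/") else "blocked"
    print(f"{agent}: {verdict}")
```

With the sample file above, rogerbot is blocked from /resources/ while Googlebot is not — exactly the pattern where a page loads fine in a browser but fails in a crawl tool.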