How do I fix an 803 Error?
-
I got an 803 error this week on the Moz crawl for one of my pages. The page loads normally in the browser. We use Cloudflare.
Is there anything I should do, or do I wait a week and hope it disappears?
803 Incomplete HTTP response received
Your site closed its TCP connection to our crawler before our crawler could read a complete HTTP response. This typically occurs when misconfigured back-end software responds with a status line and headers but immediately closes the connection without sending any response data.
-
Kristina from Moz's Help Team here. Here is the working link to our Crawl Errors resource guide if you're still needing it!
https://moz.com/help/guides/moz-pro-overview/crawl-diagnostics/errors-in-crawl-reports
-
It would be great to read more about this issue here. I would love to debug/troubleshoot the 803 errors, but I have no idea where to start. One problem: it's not possible to adjust the crawl speed/delay of the Moz bot, so I can't tell whether the bot itself is the problem or not. Any suggestions out there on how to debug an 803 crawl error?
TIA,
Jörg
-
Hi Sha,
The first link with the complete list is not working. I would love to access it. Where can I find the link?
Thanks in advance, Michiel
-
Same here, I found an 803 error on an image. What should I do now? Can you please help?
Thanks
-
Hi,
Found an 803 error on an image. Does that mean I should compress or otherwise improve the image, or is it a web server error?
Thank you,
-
So if it is a standard WordPress page, would the issue likely be with the WordPress code, or with my on-page content?
-
Hi Zippy-Bungle,
To understand first why the 803 error was reported:
When a page is requested, the web server sends header details describing what's about to be delivered. You can see a complete list of these HTTP header fields here.
One of the headers sent by the web server is Content-Length, which tells the client how many bytes of response body to expect. Say, for example, the Content-Length is 100 bytes, but the server only sends 74 bytes before closing the connection (what it sent may even be valid HTML, but the length does not match what the header promised).
Because the crawler is still trying to read the remaining bytes the web server said it would send, it sees the connection close prematurely, and you get an 803 error.
Now browsers tend not to care when a mismatch like this happens; they simply render whatever bytes they received. But Roger Mozbot (the Moz crawler, identified in your logs as rogerbot) is on a mission to show you any errors that might be occurring, so Roger is configured to detect and report them.
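You can check for this kind of mismatch yourself with a short script. The sketch below (hostname, port, and the User-Agent string are placeholders, and it assumes a plain-HTTP endpoint) fetches a page and compares the declared Content-Length with the number of body bytes actually received before the server closed the connection:

```python
# Sketch: detect the condition behind an 803 error by comparing a
# response's declared Content-Length with the bytes actually delivered.
import http.client

def check_content_length(host, path="/", port=80):
    conn = http.client.HTTPConnection(host, port, timeout=10)
    conn.request("GET", path, headers={"User-Agent": "length-check/0.1"})
    resp = conn.getresponse()
    declared = resp.getheader("Content-Length")
    try:
        body = resp.read()
    except http.client.IncompleteRead as err:
        body = err.partial  # server closed early; keep what we got
    conn.close()
    if declared is None:
        return None  # chunked/connection-delimited body: nothing to compare
    declared = int(declared)
    return declared, len(body), declared == len(body)
```

If the third value comes back False, the server closed the connection before delivering the promised body, which is exactly the behaviour Roger reports as an 803.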
The degree to which an 803 error will adversely affect crawl efficiency for search engine bots such as Googlebot, Bingbot and others will vary, but the fundamental problem with all 8xx errors is that they result from violations of the underlying HTTP or HTTPS protocol. The crawler expects every response it receives to conform to the protocol and will typically throw an exception when it encounters one that doesn't.
Since 8xx errors generally indicate a badly misconfigured site, fixing them should be a priority to ensure that the site can be crawled effectively. It is worth noting here that Bingbot is well known for being highly sensitive to technical errors.
So what makes the mismatch happen?
The mismatch could originate in the website itself (page code), in the back-end application, or in the web server software. There are two broad sources:
- Crappy code
- Buggy server
I'm afraid you will need to get a tech who understands this type of problem to work through each of these possibilities to isolate and resolve the root cause.
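One thing that can help whoever does the troubleshooting is establishing a pattern first: if every URL shows truncated responses, suspect the server or a front-end proxy; if only certain templates or file types do (say, images), suspect the code generating those responses. A rough sketch of that survey, assuming plain HTTP and placeholder paths:

```python
# Sketch: run the same Content-Length check over several paths to see
# whether truncated responses are site-wide (server problem) or limited
# to particular pages or file types (application/code problem).
import http.client

def truncated(host, path, port=80):
    """Return True if the server sent fewer body bytes than it declared."""
    conn = http.client.HTTPConnection(host, port, timeout=10)
    conn.request("GET", path, headers={"User-Agent": "length-check/0.1"})
    resp = conn.getresponse()
    declared = resp.getheader("Content-Length")
    try:
        received = len(resp.read())
    except http.client.IncompleteRead as err:
        received = len(err.partial)  # server closed the connection early
    conn.close()
    return declared is not None and int(declared) != received

def survey(host, paths, port=80):
    # Map each path to whether its response was truncated.
    return {path: truncated(host, path, port=port) for path in paths}
```

A result like `{"/": False, "/images/logo.png": True}` would point toward whatever serves the images rather than the web server as a whole.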
The Moz Resource Guide on HTTP Errors in Crawl Reports is also worth a read in case Roger encounters any other infrequently seen errors.
Hope that helps,
Sha