HTML and CSS errors: what do search engine spiders do if they come across coding errors? Do they stop crawling the rest of the code below the error?
-
I have a client who uses a template to build their websites (no problem with that). When I ran the site through the W3C validator it threw up a number of errors, most of which were minor, e.g. missing close tags, and I suggested they fix them before I start their off-site SEO campaigns.
When I spoke to their web designer about the issues, I was told that some of the errors were "just how it's done." So if that's the case, but the validator still registers the errors, do the SE spiders ignore them and move on, or do the errors penalize the site in some way?
-
Ryan, thanks so much for taking the time to answer, and so comprehensively too, I really appreciate it.
My client came around after I suggested that getting quality backlinks to a website full of coding errors was like hanging a crystal chandelier in a toilet, and that they were tying one of my hands behind my back by not sorting it out. Perhaps not the most expert argument, but they got the point.
Thanks for some great information and a great answer all round.
-
**When I spoke to their web designer about the issues, I was told that some of the errors were "just how it's done"**
Are you OK with that response? If your client asked you why you took a course of action on their site, would you expect the client to accept "it's just how things are done"?
Generally speaking, sites should use valid code. The W3C is the international body that establishes web coding standards. Its membership includes representatives from Microsoft (IE), Google (Chrome), Mozilla (Firefox), Apple (Safari), and others. Valid code should appear correctly in all browsers.
Generally speaking again, a developer who writes valid code is following best coding practices, and the code can be more easily reviewed by other developers. When invalid code is used, it is often due to sloppy coding practices such as not closing tags, using deprecated tags, or not being familiar with the particular encoding of the language in use. When I ask a developer why the code is not valid and the response is "it's just how things are done," the translation often is "I lack the knowledge / training / experience to write valid code."
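To illustrate (a hypothetical fragment, not taken from any real site), here is the kind of sloppy markup a validator flags, followed by a valid equivalent:

```html
<!-- Sloppy markup: deprecated presentational tags (<center>, <font>)
     and elements left unclosed -->
<center><font color="red">Welcome to our site</font>
<p>We sell widgets
<p>Contact us today

<!-- The same content as valid markup: every element closed,
     presentation left to a CSS class instead of deprecated tags -->
<h1 class="welcome">Welcome to our site</h1>
<p>We sell widgets.</p>
<p>Contact us today.</p>
```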
OK, now that I have angered many developers, let me take the flip side of the coin. Google.com does not validate. What's up with that? Well, you know the development team at Google is among the best in the world. Their project leaders likely hold doctorates or at least master's degrees, and many of them are authors of books on best coding practices. These developers clearly understand all the rules and are able to go past them to achieve better results in a given area, such as the speed optimization Google treasures.
In summary, leading companies can often employ the upper echelon of developers, people who thoroughly understand the rules and can break them for their benefit. Unfortunately, that does not trickle down to everyday developers. Most of them do not have the knowledge / training / experience to make those calls; they are either simply using sloppy coding practices or not taking the time to research alternatives. They have deadlines and they jump on whatever works.
**What do SE spiders do if they come across coding errors? Do they stop crawling the rest of the code below the error?**
The results vary based on the search engine and the type of error. Here are some examples:
1. Some errors are due to a raw "&" being used where the "&amp;" entity is required. In HTML the & character has a special purpose (it begins an entity reference), so a parser may try to perform an operation on the characters that follow rather than simply reading the & as a literal character.
2. In HTML, <br> is a perfectly valid tag. In XHTML, there is a rule that any tag which is not used in a pair should end in />. In other words, the correct form of the <br> tag in XHTML is <br />. If you have an XHTML document which generates 20 errors, and all of those errors are due to the developer using <br> instead of <br />, then a crawler should handle that issue very well. The crawler recognizes and understands the <br> tag even though it is technically invalid code.
3. An open div tag can cause a variety of issues. It all depends on what role the div is playing. It could be a very minor or a major issue.
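To make those examples concrete, here is a hypothetical XHTML fragment containing all three error types, with the valid forms noted in the comments:

```html
<!-- 1. Raw ampersand: "&" begins an entity reference, so it must be escaped.
     Valid form: <a href="page.php?a=1&amp;b=2">link</a> -->
<a href="page.php?a=1&b=2">link</a>

<!-- 2. Unclosed empty tag: valid in HTML, invalid in XHTML.
     Valid XHTML form: First line<br />Second line -->
First line<br>Second line

<!-- 3. Unclosed div: the parser must guess where the element ends,
     which can scramble the structure a crawler reconstructs. -->
<div class="content">
  <p>Some text.</p>
  <!-- missing </div> -->
```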
Google does a great job of handling invalid code. Bing seems less tolerant of coding errors and much more selective.
A video you will likely enjoy: http://www.youtube.com/watch?v=FPBACTS-tyg
Summary
You should strive for valid code on your site. Coding errors can cause a variety of issues: making it harder for other developers to work on the site, causing the site to appear incorrectly in various browsers or devices, negatively impacting page load times, and impeding search engine crawlers. Whether a specific error will cause any of those problems is impossible to say without a review of that error. While I do not develop websites, I do project-manage the development of many sites. When a site is complete, the goal is to have no validation errors. If a handful of errors exist, I ask the developer to try to eliminate them. If they cannot, I request an error-by-error explanation of why each error exists and why it cannot be eliminated. The result is a site which appears correctly in all browsers, is correctly crawled and interpreted by search engines, and is easily maintained by various developers.
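As a reference point, here is a minimal sketch of a document that passes the W3C validator with no errors; everything a real page layers on top of this skeleton is where errors tend to creep in:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Minimal valid page</title>
</head>
<body>
  <p>Every element above is opened, closed, and nested correctly.</p>
</body>
</html>
```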
A final note: just because a page validates does not mean it is developed well, and the reverse is true also. That said, with the exception of the top 1% of sites, which are developed by teams of very well trained and experienced web professionals, sites which validate are likely better designed and maintained than sites which do not.