Location of Content within the Code Structure
-
Hi guys,
When working with advanced, modern websites, achieving the required look and feel often means ending up with pages of 1,000 lines of code or more. In some cases that's impossible to avoid if we are to meet the client's visual and technical specifications. Say the page is 1,000 lines of code and our content only starts at line 450: will that have an impact on Google's crawlability, and hence hurt our SEO, making it harder to rank?
Thoughts?
Dan.
-
Yes, it's most definitely a factor in rankings, but as you say, to achieve visual perfection on a budget (using a theme rather than coding from scratch) you do end up with a lot of code.
I always ensure my sites score as high as possible in speed tests and that the HTML, CSS, and JavaScript are all properly minified (when possible), and that's about all you can do.
If the site scores at least 90/100 in the PageSpeed test, then Google is not going to hold back a site that looks good and has great content just because it has a lot of code.
Most of that code is there for browsers to render the site correctly, but good SEO mainly depends on the content contained within certain tags. I just checked one of my sites: it has 600 lines of code before the H1 tag, thanks to Revolution Slider, yet it still ranks in the top 3 for many keywords and still achieves 93/100 on the PageSpeed test.
All things being equal, custom-built flat HTML sites will always rank better than themed PHP template sites, but it's quite rare that all things are equal. Those 400 lines of code may be holding you back by one spot or five, but it's nothing that some good links or great content can't fix. I understand your point, though; it's a painfully slow process to fix that code.
-
If you have a high content-to-HTML ratio, Googlebot needs less time to crawl your site's content.
In my opinion it's an important on-page SEO factor, which is why I'm always trying to:
- avoid inline JavaScript
- keep the HTML code clean and simple
- optimize the code by deleting unnecessary characters such as extra spaces.
Last December I changed my WordPress theme to one with better, cleaner code; after 4-5 weeks my rankings improved considerably. Coincidence? I don't think so.
Br
//Oliver
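The whitespace-stripping step Oliver mentions can be sketched in a few lines. This is a deliberately naive illustration, not a production minifier: real minifiers must preserve whitespace inside `<pre>`, `<textarea>`, and inline scripts, which this sketch ignores.

```python
import re

def naive_minify(html: str) -> str:
    """Collapse runs of whitespace in an HTML string — a rough sketch only.
    A real minifier must also preserve <pre>, <textarea>, and inline scripts."""
    html = re.sub(r">\s+<", "><", html)   # drop whitespace between adjacent tags
    html = re.sub(r"\s{2,}", " ", html)   # collapse remaining whitespace runs
    return html.strip()

page = """
<html>
  <body>
    <h1>  Hello  </h1>
  </body>
</html>
"""
print(naive_minify(page))  # <html><body><h1> Hello </h1></body></html>
```

In practice you'd let your build pipeline or a dedicated minification plugin do this rather than hand-rolling it, but the byte savings come from exactly this kind of transformation.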
-
Related Questions
-
Content on desktop and mobile
My website doesn't use responsive design or a separate mobile domain; instead we use dynamic serving for the mobile version (read more about dynamic serving here). So our two versions have different designs. What happens, in terms of SEO, if our mobile version doesn't show exactly the same content as desktop but still aligns with the main content? For example, the desktop page has longer content than the mobile version, or the desktop H1 is longer than the mobile one. What should we do in this case, and how do we tell Googlebot?
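On the "how to tell Googlebot" part: for dynamic serving, Google's documented signal is the `Vary: User-Agent` response header, which tells crawlers the HTML differs by user agent so they should fetch the URL with both mobile and desktop agents. A minimal sketch follows; the mobile-detection test and the `X-Template` header are illustrative placeholders, only the `Vary` header is the documented mechanism.

```python
def dynamic_serving_headers(user_agent: str) -> dict:
    """Choose a template by User-Agent and signal dynamic serving.

    Only the `Vary: User-Agent` header is the documented crawler signal;
    the detection logic and X-Template header here are hypothetical.
    """
    is_mobile = "Mobile" in user_agent or "Android" in user_agent
    return {
        "Content-Type": "text/html; charset=utf-8",
        "Vary": "User-Agent",  # the key signal for Googlebot
        "X-Template": "mobile" if is_mobile else "desktop",
    }

print(dynamic_serving_headers("Mozilla/5.0 (iPhone; Mobile)")["X-Template"])
```

As for content parity, keeping the mobile version's headings and main content aligned with desktop is generally the safe route, since the mobile version is what gets indexed under mobile-first indexing.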
Technical SEO | ASKHANUMANTHAILAND
-
Not ranking - Scraped content
Hi, I have a problem with a website that I've never come across before. The website is https://www.enallaktikidrasi.com. It has a bunch of excellent articles, good enough on-page SEO, and a medium backlink profile. However, it is ranking for only a very few keywords. The major problem is that there are original articles that, searched by their title, won't appear in the top 100 results, yet they will appear on other websites that scrape them (even when those sites link back to our original article!). Also, the website ranks well in Bing and Yahoo but not in Google: there are keywords ranking #1 in Bing that are nowhere in the top 10 pages of Google. I'm guessing at three issues: 1. Majestic shows a very low trust score (just 13); however, the website hasn't had any kind of penalty in the last 3 years. 2. There are many scrapers, and the odd thing is that scrapers with no real value (and almost zero backlink profile) outrank our content. 3. We ran Sucuri on the website after a large bot attack. Is there a correlation between the bot attack and the Google results? (And if so, why not in Bing and Yahoo too?) It seems like Google underestimates the website when indexing it for some reason. Moreover, some of the articles are really the best around, but the keywords they target aren't even within the first 30 pages. Any help? Thanks.
Technical SEO | alex33andros
-
Duplicate Content
Crawl Diagnostics has returned several issues that I'm unsure how to fix. I'm guessing it's a canonical link issue, but I'm not entirely sure. Duplicate page content/titles:
- On a website (http://www.smselectronics.co.uk/market-sectors) with 6 market sectors, each sector pulls the same 3 child pages: certifications, equipment & case studies.
- On each product section, where a page only shows X items, several pages are needed to fit all the products, which creates multiple pages. There is a similar pagination problem with the blogs (auto-generated date titles & user-created SEO titles) and news listings.
- Blog tags also seem to generate duplicate pages with the same content/titles as the parent page.
Are these particularly important for SEO, or is it more important to remove the duplication by deleting them? Any help would be greatly appreciated. Thanks
Technical SEO | BBDCreative
-
Content not being spidered
I've got a site with some serious content issues. The builder of the template doesn't understand what I'm asking (they're confusing spidering with indexing). If the page is run through a spider simulator (web confs won't work on this site for some reason), it shows the content is not being seen by Google. The template is Momentum, on Joomla. Most other sites I've found on the web have a similar issue. Basically it's reading the text in the header and footer but nothing in the body. Any thoughts? www.rocksolidroof.com
Technical SEO | GregWalt
-
Google Schema Code for Organisation
I've created the Google schema code for an organisation. Should this go in the template HTML so it would be shown on all pages, or just on the home page?
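For reference, Organization markup is usually emitted as a JSON-LD `<script>` block, and Google's structured-data guidance generally suggests one representative page (commonly the home page) is enough for it. A minimal sketch that builds such a block, with all values being placeholders:

```python
import json

# A minimal Organization JSON-LD object (all values are placeholders).
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Ltd",
    "url": "https://www.example.com",
    "logo": "https://www.example.com/logo.png",
}

# Wrap it in the script tag that goes in the page <head>.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(organization, indent=2)
    + "\n</script>"
)
print(snippet)
```

Putting it in the sitewide template isn't harmful, but it's redundant; one authoritative placement keeps the markup easier to maintain.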
Technical SEO | CharlBritton
-
Do dropdowns count as unique content?
My current site has some extensive unique database content by "widget" type. Currently we display this info in HTML, but we are considering putting this data in a dropdown field on each respective widget page. I want to ensure we don't have thin content. Does the content within the <option> tags of a dropdown count towards unique content?
Technical SEO | TheDude
-
Does server location matter?
Hi guys, A friend's website is hosted in Germany (showing German IP in Flagfox) but it is a UK-based local business that only serves customers within a small radius covering 3 medium sized UK towns (they sell heavy construction materials for collection only). Should I advise him to change hosting location to the UK? Will this help him rank better for regional keyword searches & Google Places? He has some 'followed' links from UK sites (over 6 months old) that are not being picked up by Majestic, OSE or Webmaster Tools - is this likely to be connected to the server location? Thanks in advance for any help!
Technical SEO | Tman3
-
Omniture tracking code URLs creating duplicate content
My ecommerce company uses Omniture tracking codes for a variety of tracking parameters, from promotional emails to third-party comparison shopping engines. All of these tracking codes create URLs that look like www.domain.com/?s_cid=(tracking parameter), which are identical to the original page, and these dynamic tracking pages are being indexed. The cached version is still the original page. For now, the duplicate versions do not appear to be affecting rankings, but as we ramp up with holiday sales, promotions, adding more CSEs, etc., there will be more and more tracking URLs that could potentially hurt us. What is the best solution for this problem? If we use robots.txt to block the ?s_cid versions, it may affect our listings on CSEs, as their bots will try to crawl the link to find product info and pricing but will be denied. Is this correct? Or do CSEs generally use other methods for gathering and verifying product information? So far the most comprehensive solution I can think of would be to add a rel=canonical tag to every unique static URL on our site, which should solve the duplicate content issues, but we have thousands of pages and this would take an eternity (unless someone knows a good way to do this automatically; I'm not a programmer, so maybe there's a way I don't know). Any help, advice, or suggestions will be appreciated. If you have a solution, please explain why it would work, to help me understand on a deeper level in case something like this comes up again. Thanks!
Technical SEO | BrianCC
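On the "do it automatically" point: the usual approach is not to hand-edit thousands of pages but to have the page template compute a self-referencing canonical by stripping known tracking parameters from the requested URL. A minimal sketch follows; the `s_cid` parameter comes from the question above, everything else (names, example URL) is hypothetical.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

TRACKING_PARAMS = {"s_cid"}  # the Omniture parameter from the question

def canonical_url(url: str) -> str:
    """Strip known tracking parameters so every tracked URL maps back to one
    canonical address. A sketch: the real fix belongs in the site's template
    layer, emitting a <link rel="canonical"> on every page render."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path, urlencode(kept), ""))

url = "https://www.domain.com/product?s_cid=email_oct&color=red"
print(f'<link rel="canonical" href="{canonical_url(url)}" />')
```

Because the canonical is derived from the URL itself, one template change covers every page at once, and the CSE bots can still crawl the tracked URLs normally (unlike the robots.txt approach, which blocks them).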