H1 tag found on page, but content doesn't match keyword
-
We've run an on-page grader test on our home page www.whichledlight.com with the keyword 'led bulbs'.
It comes back saying there is an H1 tag, although the content of the tag apparently doesn't contain 'led bulbs'... which seems a bit odd, because the content of the tag is:
'UK’s #1 Price Comparison Site for LED Bulbs'
I've used other SEO checkers, and some say we don't even have an H1 tag (or H2, H3, and so on) on any page.
Screaming Frog seems to think we have an H1 tag, though, and can also detect the content of the tag.
Any ideas?
** Update **
The website is a single-page app (EmberJS), so we use prerendering to create snapshots of the pages.
We were under the impression that Moz can crawl these prerendered pages fine, so we were a bit baffled as to why it would say we have an H1 tag but think the contents of the tag still don't match our keyword.
-
I checked the source with my default user agent (in this case Firefox) and did NOT see an H1 tag.
I checked with my user agent set to GoogleBot and DID see an H1 tag, which did have that keyword phrase in it.
I checked again with a default user agent, but this time with JavaScript disabled, and could not see anything at all on the viewable page (blank white page), though the source code was there without the H1 tag.
So it seems to me like you're pre-rendering the page for GoogleBot, and are including the H1 (and other header tags) as part of a fully-rendered page for search engines. However, because that header tag does not exist if you turn JavaScript off - or if you're not Google - there may be a risk of Google seeing this page as "cloaking".
Pre-rendering is good. It's not a "bad" type of cloaking if you serve the EXACT same page to search engines that you serve to everyone else. Unfortunately, this does not seem to be the case with the way this page is set up. Google sees one thing, other visitors (with or without JavaScript enabled) see something else.
I know developers are head-over-heels for single-page apps and JavaScript frameworks, but this stuff is starting to drive me nuts. It's like trying to optimize Flash sites all over again. On the one hand you have Google bragging about how great they are at crawling JavaScript, even going so far as to say pre-rendering is not necessary... And on the other hand there are clear, sustained, organic search traffic drops whenever developers start turning flat HTML/CSS pages into these single-page JavaScript framework applications.
My advice to you is that if you're going to pre-render a page for Google: (a) make sure the page a user with JavaScript enabled sees is exactly the same as what Google sees, and (b) see if you can pre-render pages for visitors without JavaScript enabled as well.
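The difference between the two setups comes down to one decision: which HTML does a given request get? A sketch of that decision, reduced to pure functions — the bot pattern is an assumption for illustration, and real prerender middleware keeps a much longer list:

```javascript
// Crude bot detector (illustrative only).
const BOT_PATTERN = /googlebot|bingbot|yandex|baiduspider/i;

// Risky setup (what the page above appears to do): crawlers get the
// prerendered snapshot, everyone else gets the empty app shell.
function serveBotsOnly(userAgent, snapshotHtml, appShellHtml) {
  return BOT_PATTERN.test(userAgent) ? snapshotHtml : appShellHtml;
}

// Safer setup: every request gets the same prerendered markup, and the
// SPA boots on top of it in the browser, so search engines and human
// visitors see identical content.
function serveEveryone(userAgent, snapshotHtml) {
  return snapshotHtml;
}
```

In the first function, the response depends on who is asking, which is the pattern that risks being read as cloaking; in the second, it never does.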
-
Yes, I see what you mean.
We get the same if we view the source. Inspect Element shows it correctly.
I take it you mean SEO checkers are checking the source code... before JS modifies it?
Do you think this is hurting our SEO?
-
I did a 'View Source' and an 'Inspect' on your homepage.
In View Source there was no H1 tag; however, in Inspect there is clearly an H1 tag (H2 and H3 tags exist too).
"View Source" typically shows what was received from the server, before JavaScript modifies it. I suspect your developer wrote it this way to optimize for speed (with jQuery).
That being said, the SEO checkers that claim you do not have an H1 tag are only reading that raw source code, not the rendered document.
In short: yes, your website has H1, H2, and H3 tags.
Just curious, what content did the on-page grader report for the H1?
Related Questions
-
Site's pages have GA code based on Tag Manager, but Screaming Frog does not recognize it
Using Tag Assistant (a Google Chrome add-on), we have found that the site's pages have GA code (see screenshot 1). However, when we used Screaming Frog's filter feature -- Configuration > Custom > Search > Contain/Does Not Contain (see screenshot 2) -- SF displays several URLs (maybe all) of the site under 'Does Not Contain', which means that in SF's crawl the site's pages have no GA code (see screenshot 3). Why would SF state that there is no GA code on the site's pages when, according to Tag Assistant/Manager, there is? Please give us steps/ways to fix this issue. Thanks!
Intermediate & Advanced SEO | | jayoliverwright
-
HTML5: Changing 'section' content to be 'main' for better SEO relevance?
We received an HTML5 recommendation that we should change on-page text copy contained in 'section' to be listed in 'main' instead, because this is supposedly better for SEO. We're questioning the need to ask developers to spend time on this purely for a perceived SEO benefit. Sure, maybe content in 'footer' may be seen as less relevant, but calling out 'section' as having less relevance than 'main'? Yes, it's true that engines evaluate where on-page content is located, but this level of granular focus seems unnecessary. That being said, we're more than happy to be corrected if there is actually a benefit. On a side note, 'main' isn't supported by older versions of IE and could cause browser incompatibilities (http://caniuse.com/#feat=html5semantic). Would love to hear others' feedback about this - thanks! 🙂
Intermediate & Advanced SEO | | mirabile
-
Our client's web property recently switched over to secure pages (https), however their non-secure pages (http) are still being indexed in Google. Should we request in GWMT to have the non-secure pages deindexed?
Our client recently switched over to https via a new SSL certificate. They have also implemented rel canonicals for most of their internal webpages (pointing to the https versions). However, many of their non-secure webpages are still being indexed by Google. We have access to their GWMT for both the secure and non-secure pages.
Should we just let Google figure out what to do with the non-secure pages? We would like to set up 301 redirects from the old non-secure pages to the new secure pages, but we're not sure if this is going to happen. We thought about requesting in GWMT for Google to remove the non-secure pages, however we felt this was pretty drastic. Any recommendations would be much appreciated.
Intermediate & Advanced SEO | | RosemaryB
-
Page titles: keyword-rich or single keyword?
Hi everybody, my previous SEO company had set the page titles like: keyword | keyword | keyword | keyword | keyword | keyword | keyword |. My new one is changing everything and replacing them with sentences including one or two keywords in each one. Could you please let me know which is the better approach? Thanks
Intermediate & Advanced SEO | | AlirezaHamidian
-
How to make an AJAX site crawlable when PushState and #! can't be used?
Dear Mozzers, Does anyone know a solution to make an AJAX site crawlable if: 1. You can't make use of #! (with HTML snapshots) due to tracking in Analytics 2. PushState can't be implemented Could it be a solution to create two versions of each page (one without #!, so campaigns can be tracked in Analytics & one with #! which will be presented to Google)? Or is there another magical solution that works as well? Any input or advice is highly appreciated! Kind regards, Peter
Intermediate & Advanced SEO | | ConversionMob
-
Robots.txt file - how to block thousands of pages when you don't have a folder path
Hello. Just wondering if anyone has come across this and can tell me if it worked or not.
Goal: to block review pages.
Challenge: the URLs aren't constructed using folders; they look like this:
www.website.com/default.aspx?z=review&PG1234
www.website.com/default.aspx?z=review&PG1235
www.website.com/default.aspx?z=review&PG1236
So the first part of the URL is the same (i.e. /default.aspx?z=review) and the unique part comes immediately after - so not as a folder. Looking at Google's recommendations, they show examples for ways to block 'folder directories' and 'individual pages' only.
Question: if I add the following to the robots.txt file, will it block all review pages?
User-agent: *
Disallow: /default.aspx?z=review
Much thanks, Davinia
Intermediate & Advanced SEO | | Unity
-
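For what it's worth, robots.txt Disallow values are URL-prefix matches, so a single rule containing the shared part of those URLs covers every review page. A simplified sketch of that matching logic in JavaScript (real parsers also handle `*` wildcards, `$` anchors, and percent-encoding, which this ignores):

```javascript
// Simplified model of robots.txt matching: a Disallow rule blocks any
// path that starts with the rule's value.
function isDisallowed(path, disallowRule) {
  return path.startsWith(disallowRule);
}

const rule = '/default.aspx?z=review';
isDisallowed('/default.aspx?z=review&PG1234', rule); // blocked
isDisallowed('/default.aspx?z=shop', rule);          // not blocked
```

Under this model, the rule in the question above would block all three example URLs, since each begins with `/default.aspx?z=review`.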
Optimising a page for multiple keywords
I remember reading a question a while back about SEO for a page targeting multiple keywords, but I'm blowed if I can find it now... I have a page which is optimised for one phrase and want to add 5-6 phrases/keywords; obviously I can't stuff all the keywords into the page title or the header 1 tag. So I have written the content to mention the other keywords. The trouble is, not wanting to compromise the quality of the page, some of the keywords/phrases I have only been able to use once in the content. I assumed that as the phrases are all on the same topic/area, this should not really matter. Apart from link building with the correct anchor text, is there anything else I should be doing? The other option is to create custom pages for each keyword, but again I am not keen on this idea... Any suggestions?
Intermediate & Advanced SEO | | JohnW-UK
-
Can't find my site on Bing, and it's been ages
Hi guys, well, the problem seems normal, but I guess it's not. I have tried many things and nothing changed it; now I'll give it one last try and ask, so maybe you can help me. The problem is: I can't find my site anywhere on Bing - I mean nowhere in the first 20 pages for my keyword "beauty tips" - and the site is http://www.beauty-tips.net/. In my opinion it should be pretty high... maybe it's too high so I can't see it ;). I never had special problems with Bing; it was easier to be there "somewhere" than in Google, but with this one it's totally the opposite. Any ideas? Thanks for your time!
Intermediate & Advanced SEO | | Luke22