Do I use the H1 tag for the logo or for page content?
-
Should the H1 tag be used for the main page content or for the logo?
I understand the original method was to make the logo the H1, with the main search term in it - does this still hold true, or should the H1 be content-focused?
-
No. Your brand name will always be your brand name - if someone searches for it, your site should come up at No. 1 regardless. So don't waste an H1/SEO opportunity on it unless the boss demands it. The searcher is already looking for you, and you should rank No. 1 for your brand name anyway, so in that instance a brand-name H1 has no impact.
The exception is where the brand name is, or includes, the target keyword, so you may need to think a little laterally. If the company is called "Health Insurance Inc" and is chasing the phrase "health insurance", then making your H1 your brand name, or part thereof, is a good idea.
The H1 should be helpful to both Google and the customer, so frame your H1 in that light for each page.
Hope that assists.
-
Thanks, interesting - makes good sense.
So to clarify, an exception might be if you were promoting the brand name?
-
No, the H1 is important for Google. It helps match a searcher's intent with the page - i.e. if they are looking for beds, then ideally the H1 has the word "bed" in it. It also reassures the customer that they are on a relevant site if the H1 in some manner matches the keyword (the searcher's intent) they typed in.
The H1 helps tell Google what your page is about, especially if the content on the page is not clear. Their impact has been reduced over time, but I believe it is still significant, and they are simple enough to implement and get right. Each page should answer a customer query, so think of H1s as content titles that help Google understand what each page is about.
Each page should have exactly one H1 tag, unique to that page, that describes the page's content and contains its target keyword.
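For example, on a page targeting a hypothetical keyword like "memory foam mattresses", the markup might look like this, with the logo kept out of the H1:

```html
<!-- One H1 per page, describing the content and carrying the target keyword -->
<h1>Memory Foam Mattresses: Compare Prices and Reviews</h1>

<!-- The logo lives outside the H1, as a plain linked image -->
<a href="/"><img src="/logo.png" alt="Brand Name"></a>
```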
Hope that assists. To answer your query clearly: the H1 should not be your logo, though there may be rare page-level exceptions.
Related Questions
-
Are there any downsides to using a canonical tag temporarily?
I'm working on redesigning our website. One of the content types has a main archive page (/success-stories) containing all of the success stories (written by graduates of our program). Because we plan to have success stories for other people (non-graduates), I'm using category hierarchies (/success-stories/graduates and /success-stories/nonprofits, for example). It will go one level deeper to organize graduates by graduation year (/success-stories/graduates/%year%). I think this will work out well. However, we won't have non-graduate success stories for a little while, probably at least a few weeks, which means that the /success-stories and /.../graduates indices will contain the same content for a while. So my question is this: will it hurt to use a canonical tag that points to /success-stories/graduates as the authority until the main archive page contains more than just graduates? Or would it be better to use a 302 redirect from /success-stories to /.../graduates until more diverse content is added?
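For reference, the temporary canonical being considered would sit in the <head> of /success-stories and look something like this (example.com standing in for the real domain):

```html
<!-- Placed on /success-stories while it duplicates the graduates archive; removed once non-graduate stories exist -->
<link rel="canonical" href="https://www.example.com/success-stories/graduates/" />
```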
Intermediate & Advanced SEO | bcaples
-
Should I use NoIndex on short-lived pages?
Hello, I have a large number of product pages on my site that are relatively short-lived: probably in the region of a million+ pages that are created and then removed within a 24-hour period. Previously these pages were being indexed by Google and did receive landings, but in recent times I've been applying a NoIndex tag to them. I've been doing that as a way of managing our crawl budget, but also because the 410 pages that we serve when one of these product pages is gone are quite weak and deliver a relatively poor user experience. We're working to address the quality of those 410 pages, but my question is: should I be noindexing these product pages in the first place? Any thoughts or comments would be welcome. Thanks.
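For reference, the NoIndex mentioned above is typically applied as a meta tag in each page's <head>; at this page volume, the equivalent X-Robots-Tag HTTP response header is a common alternative, since it avoids touching every template:

```html
<!-- Tells search engines not to index this page -->
<meta name="robots" content="noindex">
```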
Intermediate & Advanced SEO | PhilipHGray
-
Does having a lot of pages with noindex and nofollow tags affect rankings?
We are an e-commerce marketplace for alternative fashion and home decor, with over 1,000 stores on the marketplace. We switched the website from HTTP to HTTPS in March 2018 and also added noindex and nofollow tags to the store "about" pages and store policies (mostly boilerplate content). Our traffic dropped by 45% and has since not recovered. I am wondering: could these tags be affecting our rankings?
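For reference, the tags in question are presumably of this form in the <head> of each store page:

```html
<!-- Tells search engines not to index the page and not to follow its links -->
<meta name="robots" content="noindex, nofollow">
```

Note that nofollow also stops crawlers following the internal links on those pages, which is one way such tags can have an effect beyond the tagged pages themselves.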
Intermediate & Advanced SEO | JimJ
-
Does using a hash menu system drive SEO power to my sub-pages?
My website (a large professional one) uses an interesting menu system. When a user hovers over text (which is not clickable), a larger sub-menu appears on the screen; when they hover over something else, this sub-menu changes or disappears. This menu is driven by a hash (#), which makes me wonder: is this giving my sub-pages an SEO kick? Or is there another way we should be doing this in order to get that SEO kick?
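For reference, the distinction at stake is between a hash fragment and a real URL in the menu markup - a minimal sketch with hypothetical paths:

```html
<!-- Hash-driven menu item: everything after "#" stays on the same URL,
     so there is no separate page for search engines to index or pass authority to -->
<a href="#sub-menu-services">Services</a>

<!-- Crawlable alternative: a real URL that can be indexed and accumulate link equity -->
<a href="/services/">Services</a>
```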
Intermediate & Advanced SEO | adamorn
-
Using the same content on different TLDs
Hi everyone, we have clients we are going to work with in different countries, sometimes with the same language. For example, we might have a client in a competitive niche operating in Germany, Austria and Switzerland (Swiss German), i.e. we could potentially end up rewriting the website three times in German. We're thinking of using Google's hreflang tags with pretty much the same content - is this a safe option? Has anyone actually tried this, successfully or otherwise? All answers appreciated. Cheers, Mel.
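For reference, hreflang annotations for that German/Austrian/Swiss example would look something like this in the <head> of each version (example.de/.at/.ch are placeholders; each page must list itself and all alternates, and the tags must be reciprocal across the three sites):

```html
<link rel="alternate" hreflang="de-DE" href="https://www.example.de/" />
<link rel="alternate" hreflang="de-AT" href="https://www.example.at/" />
<link rel="alternate" hreflang="de-CH" href="https://www.example.ch/" />
```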
Intermediate & Advanced SEO | dancape
-
How do I best handle Duplicate Content on an IIS site using 301 redirects?
The crawl report for a site indicates the existence of both www and non-www content, which I am aware is duplicate. However, only the www pages are indexed**, which is throwing me off. There are no 'noindex' tags on the non-www pages, nothing in robots.txt, and I can't find a sitemap. I believe a 301 redirect from the non-www pages is what is in order. Is this accurate? I believe the site runs on IIS with ASP, as the pages end in .asp (not very familiar to me). There are multiple versions of the homepage, including 'index.html' and 'default.asp', and meta refresh tags are being used to point to 'default.asp'.

What has been done:
1. I set the preferred domain to 'www' in Google's Webmaster Tools, as most links already point to www.
2. The Wordpress blog, which sits in a /blog subdirectory, has been set with rel="canonical" pointing to the www version.

What I have asked the programmer to do:
1. Add 301 redirects from the non-www pages to the www pages.
2. Set all versions of the homepage to redirect to www.site.org using 301 redirects, as opposed to meta refresh tags.

Have all bases been covered correctly? One more concern: I notice the canonical tags in the source code of the blog use a trailing slash - will this create a problem of inconsistency? (And why is rel="canonical" the standard for Wordpress SEO plugins, while 301 redirects are preferred for SEO?) Thanks a million!

**To clarify regarding the indexation of non-www pages: a search for 'site:site.org -inurl:www' returns only 7 pages without www, which are all blog pages without content (code 200, not 404 - maybe deleted or moved - which is perhaps another 301 redirect issue).
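For the non-www to www redirect, a common approach on IIS is a rewrite rule in web.config. A minimal sketch, assuming the IIS URL Rewrite module is installed and using site.org (from the question) as a stand-in for the real domain:

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- 301 any request for site.org to the same path on www.site.org -->
        <rule name="Redirect non-www to www" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^site\.org$" />
          </conditions>
          <action type="Redirect" url="http://www.site.org/{R:1}" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```

The same pattern (a Permanent redirect action) can replace the meta refresh tags on the alternate homepage URLs.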
Intermediate & Advanced SEO | kimmiedawn
-
"Duplicate" Page Titles and Content
Hi All, This is a rather lengthy one, so please bear with me! SEOmoz has recently crawled 10,000 webpages from my site, FrenchEntree, and has returned 8,000 errors of duplicate page content. The main reason I have so many is the directories I have on site.

The site is broken down into two levels of hierarchy: "weblets" and "articles". A weblet is a landing page, and articles are created within these weblets. Weblets can hold any number of articles - 0 to 1,000,000 (in theory) - and an article must be assigned to a weblet in order for it to work. Here's roughly how it looks in URL form: http://www.mysite.com/[weblet]/[articleID]/

Now, our directory results pages are weblets with standard content in the left and right hand columns, but the information in the middle column is pulled in from our directory database following a user query. This happens by adding the query string to the end of the URL. We have 3 main directory databases, but perhaps around 100 weblets promoting various 'canned' queries that users may want to navigate straight into. However, any one of the 100 directory-promoting weblets could return any query from the parent directory database with the correct query string.

The problem with this method (as pointed out by the 8,000 errors) is that each possible permutation of search is considered to be its own URL, and therefore its own page. The example I will use is the first alphabetically, "Activity Holidays in France":

http://www.frenchentree.com/activity-holidays-france/ - This link shows you a results weblet without the query at the end, and therefore only displays the left and right hand columns as populated.

http://www.frenchentree.com/activity-holidays-france/home.asp?CategoryFilter= - This link shows you the same weblet with an 'open' query on the end, i.e. display all results from this database. Listings are displayed in the middle.

There are around 500 different URL permutations for this weblet alone when you take into account the various categories and cities a user may want to search in.

What I'd like to do is prevent SEOmoz (and therefore search engines) from counting each individual query permutation as a unique page, without harming the visibility that the directory results receive in SERPs. We often appear in the top 5 for quite competitive keywords and we'd like it to stay that way. I also wouldn't want the search engine results to display (and therefore direct the user through to) an empty weblet because of some sort of robots exclusion or canonical classification.

Does anyone have any advice on how best to remove the "duplication" problem whilst keeping the search visibility? All advice welcome. Thanks Matt
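For reference, the "canonical classification" mentioned above would look something like this in the <head> of each query permutation, using the URLs from the example:

```html
<!-- Served on e.g. /activity-holidays-france/home.asp?CategoryFilter=... -->
<!-- Each permutation points at the clean weblet URL, consolidating the duplicates
     without blocking the pages from being crawled -->
<link rel="canonical" href="http://www.frenchentree.com/activity-holidays-france/" />
```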
Intermediate & Advanced SEO | Horizon
-
Duplicate page content
Hi. I am getting a duplicate content error on my website, and the pages it's showing are: www.mysitename.com and www.mysitename.com/index.html. To the best of my knowledge it is only one page. I know this can be solved with a canonical tag in the header, but I do not know how. Can anyone please tell me about that code, or any other way to get this solved? Thanks
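For reference, the canonical tag being asked about would go in the <head> of the page and look something like this (with www.mysitename.com standing in for the real domain):

```html
<!-- Both www.mysitename.com/ and www.mysitename.com/index.html serve this page;
     declare one version as the canonical URL -->
<link rel="canonical" href="http://www.mysitename.com/" />
```

A 301 redirect from /index.html to / achieves the same consolidation at the server level.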
Intermediate & Advanced SEO | onlinetraffic