"nocontent" class use for Google Custom Search: SEO Ramifications?
-
Hi all,
Have a client that uses the Google Custom Search tool, which is crawling, indexing, and returning millions of irrelevant results for keywords that appear on every page of the site. The IT/web dev team is considering adding a class attribute to prevent Google Custom Search from indexing boilerplate content regions.
Here's the link to Google's custom search help page:
http://support.google.com/customsearch/bin/answer.py?hl=en&answer=2364585
"...If your pages have regions containing boilerplate content that's not relevant to the main content of the page, you can identify it using the nocontent class attribute. When Google Custom Search sees this tag, we'll ignore any keywords it contains and won't take them into account when calculating ranking for your Custom Search engine. (We'll still follow and crawl any links contained in the text marked nocontent.) To use the nocontent class attribute, include the boilerplate content in a tag (for example, span or div) like this: [...]"
Google Custom Search also notes:
"Using nocontent won't impact your site's performance in Google Web Search, or our crawling of your site, in any way. We'll continue to follow any links in tagged content; we just won't use keywords to calculate ranking for your Custom Search engine."
Just want to confirm: can anyone foresee any SEO implications the use of this class could create? Anyone have experience with this? Thank you! -
Hi happygirlftw (nice name!)
While I don't have any direct experience using the "nocontent" class, I can't see any reason why it should hurt you. This seems to be exactly what it was designed for.
AdSense offers a similar type of markup for web publishers called section targeting. I've used it with great success and it doesn't have any effect on organic results.
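For comparison, AdSense section targeting uses HTML comment markers rather than a class attribute. A sketch of that syntax (the comment markers are from Google's AdSense documentation; the sample content is illustrative):

```html
<!-- google_ad_section_start -->
<p>Main article text that AdSense should weight when choosing ads...</p>
<!-- google_ad_section_end -->

<!-- google_ad_section_start(weight=ignore) -->
<p>Boilerplate navigation or promo text AdSense should de-emphasize...</p>
<!-- google_ad_section_end -->
```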
Finally, I'd be curious as to why Google couldn't identify the boilerplate content of your site on its own. Google Custom Search uses different algorithms than regular search, but in this day and age we encourage most webmasters to reduce their boilerplate content, so this might warrant a closer look.
Best of luck!