How do I block text on a page from being indexed?
-
I would like to stop spiders from indexing a block of text inside a page; however, I do not want to block the whole page with, for example, a noindex tag.
I have already tried a tag like this:
<!--googleoff: index-->
chocolate pudding
<!--googleon: index-->
However, this is not working in my case, a travel-related website.
Thanks in advance for your support.
Best regards
Gianluca
-
Gianluca,
Rand's Whiteboard Friday from a couple of weeks ago may help you: http://moz.com/blog/handling-duplicate-content-across-large-numbers-of-urls
Though that Whiteboard Friday is about duplicate content issues, one piece you can probably use from it is this: embed the content in an iframe on the page. Content loaded in an iframe is not treated as part of the parent URL, so it stays out of that page's indexed content. Add a noindex tag to the HTML document loaded inside the iframe to be 100% sure that search engines do not index it.
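A minimal sketch of that setup, assuming the block of text lives in its own HTML file (the /snippets/product-text.html path is hypothetical):

<!-- On the main page: load the block of text from a separate document -->
<iframe src="/snippets/product-text.html" width="600" height="150"></iframe>

<!-- /snippets/product-text.html: the framed document carries its own noindex -->
<!DOCTYPE html>
<html>
<head>
  <meta name="robots" content="noindex">
</head>
<body>
  <p>chocolate pudding</p>
</body>
</html>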
-
There aren't too many ways to achieve this without it looking a little odd to Google. Using images is probably the only real-world way, but do remember that Google can read images well, and I have always advised anyone wanting to do this to avoid it.
I haven't tried this myself, but I can see it working by using iframes and then disallowing them in robots.txt.
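A sketch of that robots.txt variant, assuming the framed documents are grouped under one directory (the /snippets/ path is an assumption); keep in mind that a disallowed URL can still show up in the index as a bare URL if other pages link to it:

# robots.txt at the site root
User-agent: *
# Stop compliant crawlers from fetching the framed documents
Disallow: /snippets/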
http://stackoverflow.com/questions/15685205/noindex-tag-for-google

Andy
-
@Chris - thanks for your reply. Yes, I realised only after using it that this solution doesn't apply to web search. Putting the text in an image is a possibility; however, since it will be a lot of text across many different product pages, I was looking for something easier to automate. Are there any other possibilities through tags?
-
That was a good line; I will try to remember to give you attribution. I like your stuff on here.
Best -
Unfortunately, I haven't had the opportunity. I'd love to get my hands on one, though; it'd be like holding a baby Google in your arms.
-
Chris,
Do you work with the Search Appliance? Would love to speak with you about it if so.
Thanks, great answer.
Robert
-
Gianluca,
The googleoff: tag is not used for web search; it's only used with the Google Search Appliance. Could you put the text you want to keep out of the index into an image?
Related Questions
-
Search Console Indexed Page Count vs Site:Search Operator page count
We launched a new site and Google Search Console is showing that 39 pages have been indexed. When I perform a site:myurl.com search, I see over 100 pages that appear to be indexed. Which is correct, and why is there a discrepancy? Also, the Search Console page index count started at 39 pages on 5/21 and has not increased, even though we have hundreds of pages to index. But I do see more results each week from site:psglearning.com. My site is https://wwww.psglearning.com
Technical SEO | pdowling
-
Delete indexed spam pages
Hi everyone, I'm hoping someone has had this same situation, or may know of a solution. One of our sites was recently pharma-hacked 😞 We found an entire pharmaceutical site in one of the folders of our site. We were able to delete it, but now Google is showing a not-found error for those pages we deleted. First, I guess the question is: will this harm us? If so, is there any way we can fix this? Obviously we don't want to do a 303 redirect for spam pages. Thanks!
Technical SEO | Bridge_Education_Group
-
Why did blocking a subfolder drop indexed pages by 10%?
Hi guys, maybe you can help me understand this better: on 17.04 I had 7,600 pages indexed in Google (WMT showing 6,113). I then added Disallow: /account/ to the robots.txt file; that directory contains the registration page, wishlist, etc., and I'm not interested in ranking with the registration form. On 23.04 I had 6,980 pages indexed in Google (WMT showing 5,985). I understand that this way I'm telling Google I don't want that section indexed, but why so many pages? Because of the faceted navigation? Cheers
Technical SEO | catalinmoraru
-
Is using JavaScript injected text in line with best practice on making blocks of text non-crawlable?
I have an ecommerce website that has common text on all the product pages, e.g. delivery and returns information. Is it OK to use non-crawlable, JavaScript-injected text as a method to make this content invisible to search engines? Or is this method frowned upon by Google? By way of background info: I'm concerned about duplicate/thin content, so I want to tackle this by reducing this 'common text' as well as boosting unique content on these pages. Any advice would be much appreciated.
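For illustration, the technique being asked about typically looks something like the sketch below (the element id and the endpoint are hypothetical); note that Googlebot now renders JavaScript, so text injected this way is not guaranteed to stay invisible to it:

<div id="delivery-info"></div>
<script>
// Fetch the common delivery/returns text after page load so it is
// not present in the initial HTML source.
document.addEventListener('DOMContentLoaded', function () {
  fetch('/snippets/delivery-info.txt')  // hypothetical endpoint
    .then(function (response) { return response.text(); })
    .then(function (text) {
      document.getElementById('delivery-info').textContent = text;
    });
});
</script>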
Technical SEO | Coraltoes77
-
From page 1 to page 18 on Google
Hello Mozzers! I have a question you may be able to help with. How is it possible that a page ranking well (1st result) drops from the 1st result to the 18th page in just one day? It doesn't seem to be any kind of penalization. I have now set all suspicious outgoing links to nofollow (they were not before); this may be a cause(?). Do you have any other suggestions? Thanks
Technical SEO | socialengaged
-
Best way to handle indexed pages you don't want indexed
We've had a lot of pages indexed by Google which we didn't want indexed. They relate to an ajax category filter module that works fine for front-end customers, but under the bonnet Google has been following all of the links. I've put a rule in the robots.txt file to stop Google from following any dynamic pages (with a ?) and also any ajax pages, but the pages are still indexed on Google. At the moment there are over 5,000 indexed pages which I don't want on there, and I'm worried this is causing issues with my rankings. Would a redirect rule work, or could someone offer any advice? https://www.google.co.uk/search?q=site:outdoormegastore.co.uk+inurl:default&num=100&hl=en&safe=off&prmd=imvnsl&filter=0&biw=1600&bih=809#hl=en&safe=off&sclient=psy-ab&q=site:outdoormegastore.co.uk+inurl%3Aajax&oq=site:outdoormegastore.co.uk+inurl%3Aajax&gs_l=serp.3...194108.194626.0.194891.4.4.0.0.0.0.100.305.3j1.4.0.les%3B..0.0...1c.1.SDhuslImrLY&pbx=1&bav=on.2,or.r_gc.r_pw.r_qf.&fp=ff301ef4d48490c5&biw=1920&bih=860
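For reference, rules of the kind described usually look like this (the ajax path is an assumption); note that robots.txt only blocks crawling, so URLs that were already indexed can linger in the index, and a noindex tag on those pages cannot be seen by Google while crawling is disallowed:

# robots.txt
User-agent: *
# Block crawling of any URL containing a query string
Disallow: /*?
# Block the ajax filter pages (path is an assumption)
Disallow: /ajax/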
Technical SEO | gavinhoman
-
De-indexing millions of pages - would this work?
Hi all, We run an e-commerce site with a catalogue of around 5 million products. Unfortunately, we have let Googlebot crawl and index tens of millions of search URLs, the majority of which are very thin on content or duplicates of other URLs. In short: we are in deep. Our bloated Google index is hampering our real content's ability to rank; Googlebot does not bother crawling our real content (product pages specifically) and hammers the life out of our servers. Since having Googlebot crawl and de-index tens of millions of old URLs would probably take years (?), my plan is this:
1. 301 redirect all old SERP URLs to a new SERP URL.
2. If the new URL should not be indexed, add a meta robots noindex tag on the new URL.
3. When it is evident that Google has indexed most "high quality" new URLs, disallow crawling of the old SERP URLs in robots.txt.
4. Then remove all old SERP URLs directory-style in the GWT URL Removal Tool.
This would be an example of an old URL:
www.site.com/cgi-bin/weirdapplicationname.cgi?word=bmw&what=1.2&how=2
This would be an example of a new URL:
www.site.com/search?q=bmw&category=cars&color=blue
I have two specific questions:
1. Would Google both de-index the old URL and not index the new URL after 301 redirecting the old URL to the new URL (which is noindexed), as described in point 2 above?
2. What risks are associated with removing tens of millions of URLs directory-style in the GWT URL Removal Tool? I have done this before, but then I removed "only" some 50,000 useless "add to cart" URLs. Google themselves say that you should not remove duplicate/thin content this way and that using the tool this way "may cause problems for your site".
And yes, these tens of millions of SERP URLs are the result of a faceted navigation/search function let loose all too long. And no, we cannot wait for Googlebot to crawl all these millions of URLs in order to discover the 301s. By then we would be out of business. Best regards,
TalkInThePark
Technical SEO | TalkInThePark
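As a rough sketch of steps 1 and 2 of the plan above, assuming Apache 2.4 with mod_rewrite and mod_headers (the parameter mapping is deliberately simplified):

# .htaccess at the site root
RewriteEngine On
# Step 1: 301 redirect an old CGI search URL to the new search URL,
# carrying the "word" parameter over as "q"
RewriteCond %{QUERY_STRING} (?:^|&)word=([^&]+)
RewriteRule ^cgi-bin/weirdapplicationname\.cgi$ /search?q=%1 [R=301,L]
# Step 2: mark every response from the new search page as noindex via
# an HTTP header (equivalent to the meta robots noindex tag)
<If "%{REQUEST_URI} =~ m#^/search#">
    Header set X-Robots-Tag "noindex"
</If>
-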
Should I delete a page or remove links on a penalized page?
Hello All, If I have an internal page that has low-quality links pointing to it, or a penalty, can I just remove the page and start over versus trying to remove the links? Over time, wouldn't this page disappear along with the penalty on that page? Kind of like pruning a tree: cutting off the junk limbs so others can grow stronger, or to start fresh new ones. Example: www.domain.com Penalized internal page (say this page is penalized due to keyword stuffing, and has low-quality links pointing to it, like blog comments or profiles): www.domain.com/penalized-internal-page.com Would it be effective to just delete this page (www.domain.com/penalized-internal-page.com) and start over with a new page? New internal page: www.domain.com/new-internal-page.com I would of course lose any good links pointing to that page, but it might be easier than trying to remove the old backlinks. Thoughts? Thanks! Pete
Technical SEO | Juratovic