Using a lot of "Read More" Hidden text
-
My site has a LOT of "read more" links, and when a user clicks one they will see a lot of text. The "read more" is dark blue, bold, and clear to the user. It is perfect for the user experience, since right below I have pictures and videos, which is what most users want.
Question: I expect few users will click "Read more" (however, some users will appreciate the chance to read and learn more), and I wonder whether search engines may think I am hiding text. Is this a risky approach, or will they simply discount the text as having zero value from an SEO perspective?
Or, equally important: if the text were NOT hidden behind a "Read more", would it actually carry more SEO value than if it is hidden, even though users will NOT read it anyway? If yes, the reason may be this: when the text is not hidden, search engines cannot see that users are not reading it, so the text carries more weight from an SEO perspective than on pages where text is hidden under a "Read more" that users rarely click.
-
Hi khi5
I analyzed your page. You are doing just fine. You are using CSS display: none, and you are not doing any cloaking.
You are doing the right thing:
1. You are not fooling Google.
2. You are not fooling the user.
3. You are giving the user a better user experience.
Don't worry, you are not applying any "black hat" technique. You will not get penalized.
-
Thanks, Anirban. I am not a programmer, so would you be able to tell me if this approach seems right: http://www.honoluluhi5.com/oahu/honolulu-condos/ - I don't know whether it uses CSS display: none.
I can't think of a better layout for that page, and hiding text the way I have done is ideal for users. If I showed more text, surely the bounce rate would go up!
-
A technique used by many huge sites is to pre-load code, navigation, or content in the background so that it can be dynamically displayed as needed. The most common way of accomplishing this is the CSS display: none property.
Unfortunately, you can also use display: none to simply hide text. This is where the perceived problem comes in. People worry that using display: none to hide content (and show it when the user asks for it), or for content that is really meant for screen readers, can get them into trouble. The legitimate use of this technique is so prevalent that I would rarely expect search engines to penalize a site for using the display: none property. It is just very difficult to implement an algorithm that could truly ferret out whether a particular use of display: none is meant to deceive the search engines or not.
I usually use this tactic to make the page more user friendly, and it is useful for the user too. The user doesn't get bombarded by a large piece of content, and I am not fooling the user or Google. I am giving the user the option to read more if they want to.
"display: none"
What it does :- the functionality is same - when user clicks "read more" it opens and when user click "less" it closes.
How it defeats the "cloaking" idea:- When google crawls your page where the full content is there (text based browser, not java enabled) and when user sees the page there is a "read more" link and by clicking it it shows the full content. So you are not showing two different things to google & user. it solves the problem.there shouldn't be a a cloaking problem. Its tested.
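For illustration, here is a minimal sketch of that pattern. The class names, IDs, and the toggle handler are hypothetical, not taken from your site, but the idea is the same: the full text is present in the HTML that crawlers fetch, and CSS display: none plus a small script simply control whether the user sees it.

```html
<!-- Minimal "read more" sketch: the full text is in the HTML; only its visibility changes -->
<style>
  /* Hidden by default; crawlers still see the text in the page source */
  .read-more-body { display: none; }
  .read-more-body.is-open { display: block; }
</style>

<p>Short intro paragraph that is always visible…</p>

<div class="read-more-body" id="extra-text">
  <p>Longer descriptive text that only appears after the user clicks "read more".</p>
</div>

<a href="#" id="read-more-toggle">Read more</a>

<script>
  // Toggle the hidden block; the content itself never changes, only display: none
  var toggle = document.getElementById('read-more-toggle');
  var body = document.getElementById('extra-text');
  toggle.addEventListener('click', function (e) {
    e.preventDefault();
    var open = body.classList.toggle('is-open');
    toggle.textContent = open ? 'Less' : 'Read more';
  });
</script>
```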
Hope this helps...
Also refer to: http://moz.com/community/q/would-using-display-none-to-hide-a-section-of-text-effect-seo-negatively
-
Wow -- thanks for the links! I learn something new every day.
I'll defer to others on your specific question since I haven't ever worked with sites that specifically do what you do. I hope someone will give you a good answer!
-
http://searchengineland.com/googles-matt-cutts-on-hidden-text-using-expandable-sections-youll-be-in-good-shape-167753 - this is another, more relevant Matt Cutts video, which again says it is OK to use those "read more" sections.
Again, my bigger concern is whether it is OK, or whether I am safer showing all the text if possible.
-
Thanks, Sam. Here is a video from Matt Cutts: https://www.youtube.com/watch?v=UpK1VGJN4XY - it appears Google is OK with hidden text that makes sense for the user.
For my site I have a lot of "read more" sections like these:
http://www.honoluluhi5.com/oahu-condos/
http://www.honoluluhi5.com/oahu/honolulu-city-real-estate/
As you can see from those 2 links, I have created these pages with only the user in mind and nothing else. To play it safe, maybe I should just show all the text somehow, even though it compromises the user experience.
-
The answer to your question lies in another question: Do search engines see one thing and users see another? If the answer is "yes," then you are using "cloaking" -- which is a very bad black-hat SEO technique. It can get you penalized and possibly banned.
So users don't see the text unless they click "read more," but search engines see the text either way? That's cloaking. I'd stop doing this right away.
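For context, here is a rough sketch of what cloaking in the classic sense looks like: the server inspects the User-Agent and returns different HTML to crawlers than to human visitors. The route, the Express framework, and the content below are illustrative assumptions for the sake of the example, not anything taken from the site being discussed.

```js
// Illustrative only: classic user-agent cloaking (do NOT do this).
// The server sends crawlers content that regular visitors never see.
const express = require('express'); // assumes Express is available
const app = express();

app.get('/condo-listing', (req, res) => {
  const ua = req.get('User-Agent') || '';
  if (/Googlebot/i.test(ua)) {
    // Crawlers get keyword-stuffed text that visitors never see: this is cloaking.
    res.send('<p>Honolulu condos Honolulu condos Honolulu condos</p>');
  } else {
    res.send('<p>Browse our condo listings.</p>');
  }
});

app.listen(3000);
```

By contrast, a "read more" built with display: none serves the same HTML to everyone; only the on-screen visibility differs.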