Content within a toggle, Juice or No Juice?
-
Greetings Mozzers,
I recently added a significant amount of information to a single page using toggles, so the content is hidden until the visitor clicks to reveal it. Since the code technically starts out with "display:none", would crawlers treat that as "black hat", or simply as "not there"? The content isn't hidden in any spammy way; the toggles are purely for the visitor's UX.
Thoughts and advice are greatly appreciated!
-
Glad I could help. For peace of mind, I'd run it through one or two other tools of the same kind. You can google "spider view tool" and try out a couple of others.
-
Thanks Marisa,
Looks like it shows up there. I appreciate the tip on that tool.
-
It definitely depends on how you're doing it. Without seeing it, I can't say for sure, but usually this method only hides things from the user, not the search engines. I'd recommend running your page through a service that shows you what the spiders see.
Try: http://www.iwebtool.com/spider_view
If you see your text in the results, you're probably safe.
One of the sites I designed, customwovenlabels.com, uses this technique with JavaScript. If you go there and look at the copy on the homepage, you will see a link that says "read more." Google sees all the text and doesn't distinguish between what's initially visible and what's hidden.
So if you discover that your method isn't ideal, there are alternatives.
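For reference, a minimal sketch of the "read more" pattern being discussed. The full copy is in the initial HTML and only its rendering is suppressed with CSS, so a crawler reading the page source still sees all of the text (element IDs and copy are illustrative, not taken from any real site):

```html
<p id="teaser">
  Short intro copy...
  <a href="#"
     onclick="document.getElementById('more').style.display = 'block';
              this.style.display = 'none';
              return false;">read more</a>
</p>
<!-- Present in the page source from the first byte; only hidden visually. -->
<div id="more" style="display: none;">
  The rest of the copy lives here and is readable by crawlers
  even before the toggle is clicked.
</div>
```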
Related Questions
-
Help finding website content scraping
Hi, I need a tool to help me review sites that are plagiarising / directly copying content from my site. The tools I'm aware of, such as Copyscape, appear to work with individual URLs rather than a root domain. That's great if you have a particular post or page you want to check, but in this case some sites are scraping thousands of product pages, so I need to submit the root domain rather than an individual URL. In some cases, other sites are being listed in SERPs above, or even instead of, our site for product search terms; but so far I have stumbled across this rather than proactively researched offending sites. So I want to enter my root domain and have the tool review all of my internal site pages, then report the other domains where an individual page shares a certain amount of duplicated copy. Much as Moz crawls a site for internal duplicate pages, I need a list of external duplicate content by domain and URL, so that I can contact the offending sites to request they remove the content, and send it to Google as evidence if they don't. Any help would be gratefully appreciated. Terry
Separating the syndicated content because of Google News
Dear MozPeople, I am working on rebuilding the structure of a "news" website. For various reasons we need to keep syndicated content on the site, but at the same time we would like to apply for Google News again (we were accepted in the past but got kicked out because of the duplicate content). So I am facing the challenge of separating the original content from the syndicated content, as Google requests, and I am not sure which approach is better:

A) Put all syndicated content under "/syndicated/", Disallow /syndicated/ in robots.txt, and set a NOINDEX meta tag on every page. But in this case, I am not sure what happens if we link to these articles from other parts of the website. We will waste our link juice, right? Also, Google will not crawl these pages, so it will never see the noindex. Is this OK for Google and Google News?

B) A NOINDEX meta tag on every page only. Google will crawl these pages but will not show them in the results. We will still lose link juice from links pointing to these pages, right?

So... is there any difference? And should we put a "nofollow" attribute on all the links pointing to the syndicated pages? Is there anything else important? This is the first time I am attempting this kind of "hack", so I am not exactly sure what to do or how to proceed. Thank you!
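One point worth noting about option A: if /syndicated/ is disallowed in robots.txt, Google never fetches those pages, so it never sees the noindex tag at all. For that reason option B is the one that reliably keeps pages out of the index. A minimal sketch of option B (the path and anchor text are illustrative):

```html
<!-- On every page under /syndicated/: crawlable, but kept out of the index.
     "follow" lets Google still follow outbound links on the page. -->
<meta name="robots" content="noindex, follow" />

<!-- On internal links pointing into the syndicated section, if you want
     to withhold link equity as the question suggests -->
<a href="/syndicated/some-article" rel="nofollow">Syndicated article</a>
```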
Buying a quality domain name with juice
I run the SEO side for our company, and the Moz tools have been quite helpful for tracking keywords and bumping the rankings of certain pages by filtering the errors and filling in missing tags (H1, H3, meta, etc.). However, the website is a webshop that sells niche products, which makes getting quality backlinks quite a challenge; besides some forums and directories, there is little I think I can do to get more quality backlinks without entering grey-hat or even black-hat practices. There aren't even any blogs related to this niche in the relevant language, otherwise we would send them product samples so they would write about them. Recently (6 months ago, which is ages in SEO time) one of our competitors went bust and their domain name became available for purchase. Its Domain Authority is 21/100, while our own stands at 20/100. Now comes my question: if we were to purchase this domain and do a 301 redirect, would it pass its juice on to our site? What else can I do to improve rankings beyond the usual (title tags, H1, meta tags, img alt, valuable text, etc.)? And what other ways are there to get quality backlinks in niche markets? I don't want to buy backlinks, as I consider that a short-term solution and black hat, and since 80% of our traffic comes organically from Google, the last thing I want is a penalty.
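On the mechanics of the 301 question above: the usual setup is a domain-wide redirect on the acquired domain, mapping every path to the new site. A minimal Apache sketch, assuming the old domain stays registered and points at a server you control (both domain names are placeholders):

```apache
# .htaccess on the acquired domain (assumes Apache with mod_rewrite enabled)
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?acquired-competitor\.example$ [NC]
RewriteRule ^(.*)$ https://www.your-shop.example/$1 [R=301,L]
```

Whether link equity actually passes is a separate question; Google has historically discounted redirects from expired domains acquired purely for their links, so treat the juice as a maybe, not a given.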
Lots of websites copied my original content from my own website, what should I do?
1. Should I ask them to remove and replace the content with their unique and original content? 2. Should I ask them to link to the URL where the original content is located? 3. Should I use a tool to easily track these "copycat" sites and automatically add links from their site to my site? Thanks in advance!
How does Google decide what content is "similar" or "duplicate"?
Hello all, I have a massive duplicate content issue at the moment with a load of old employer detail pages on my site. We have 18,000 pages that look like this: http://www.eteach.com/Employer.aspx?EmpNo=26626 http://www.eteach.com/Employer.aspx?EmpNo=36986 and Google is classing all of these pages as similar content, which may result in a bunch of them being de-indexed. Now, although they all look rubbish, some of them are ranking in search engines, and looking at the traffic on a couple of them, it's clear that people who find these pages want more information on the school (everyone seems to click on the local information tab). So I don't want to just get rid of all these pages; I want to add content to them. But my question is: if I were to make up, say, 5 templates of generic content, with fields replaced by the school's name, location, and headteacher's name so that they vary from page to page, would that be enough for Google to realise they are not similar pages and stop classing them as duplicates? e.g. [School name] is a busy and dynamic school led by [headteacher's name], achieving excellence every year from Ofsted. Located in [location], [school name] offers a wide range of experiences both in the classroom and through extra-curricular activities; we encourage all of our pupils to "Aim Higher". We value all our teachers and support staff and work hard to keep [school name]'s reputation to the highest standards. Something like that... Anyone know if Google would slap me if I did that across 18,000 pages (with 4 other templates to choose from)?
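The rotating-template idea described above can be sketched in a few lines. This is a minimal illustration only; the school names, template text, and function name are invented for the example, and in practice you would write five genuinely distinct variants:

```python
import hashlib

# Hypothetical template pool; the real copy would be five distinct variants.
TEMPLATES = [
    ("{name} is a busy and dynamic school led by {head}. Located in {town}, "
     "{name} offers a wide range of experiences both in the classroom and "
     "through extra-curricular activities."),
    ("Based in {town} and headed by {head}, {name} works hard to keep its "
     "reputation to the highest standards."),
]

def render_employer_blurb(name: str, head: str, town: str) -> str:
    # Pick a template deterministically per school, so each page keeps the
    # same copy between crawls instead of changing on every request.
    digest = hashlib.md5(name.encode("utf-8")).hexdigest()
    template = TEMPLATES[int(digest, 16) % len(TEMPLATES)]
    return template.format(name=name, head=head, town=town)
```

The deterministic pick matters: if a page served different copy on every crawl, Google would see unstable content rather than merely templated content.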
Same content, different target area SEO
So, I have a gambling site that I want to target separately at Australia, Canada, the USA, and England, while still keeping the .com for worldwide (or not; read on). The website's content basically stays the same for all of them, with perhaps just small changes to the layout and the order of information (e.g. a different order for the top 10 gambling rooms).

Question 1: How should I mark up the content for Google and other search engines so that it isn't considered "duplicate content"? As mentioned, the content actually will be duplicate, but I want to target users in different areas, so I believe search engines should offer a proper way not to penalize my websites for trying to reach users on their own country TLDs. What I have thought of so far:

1. A separate Webmaster Tools account for every domain, with geographic targeting set to the specific country in it.
2. hreflang tags to indicate, for example, that this content is for GB users ("en-GB"), and likewise for the other domains. More info: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=189077
3. A country-specific IP address (the physical location of the server is not hugely important, just the IP).
4. Ideally, the IP address for the .co.uk would be from a different C-class than the one for the .com.

Is there anything I am missing here?

Question 2: Should I target the USA market with the .com, or are there other options? (We are not based in the USA, so I believe .us is out of the question.) Thank you for your answers. T
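To make the hreflang point concrete, here is a sketch of the markup; each country version of the page must carry the full set of annotations, including a self-reference, and the domains below are placeholders:

```html
<head>
  <!-- Repeated on every country version, each listing all versions plus itself -->
  <link rel="alternate" hreflang="en-GB" href="https://www.example.co.uk/" />
  <link rel="alternate" hreflang="en-AU" href="https://www.example.com.au/" />
  <link rel="alternate" hreflang="en-CA" href="https://www.example.ca/" />
  <link rel="alternate" hreflang="en-US" href="https://www.example.com/" />
  <!-- Fallback for users whose locale matches none of the above -->
  <link rel="alternate" hreflang="x-default" href="https://www.example.com/" />
</head>
```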
Does posting a source to the original content avoid duplicate content risk?
A site I work with allows registered users to post blog posts (longer articles). Often these posts were published earlier on the writer's own blog. Is posting a link to the original source enough to avoid getting dinged for duplicate content? Thanks!
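A visible source link helps attribution, but it isn't a machine-readable signal. The markup usually suggested for republished posts (not something this thread confirms, just a commonly cited option) is a cross-domain rel=canonical on the republished copy pointing at the original; the URL below is a placeholder:

```html
<!-- In the <head> of the republished copy, pointing at the original post -->
<link rel="canonical" href="https://writers-own-blog.example/original-post" />
```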
"take care about the content" is it always true?
Hi everyone, I keep reading answers to ranking questions in which the verdict is always the same: "take care of the content instead of PR", along with phrases like "you don't have to waste your time buying links, you have to engage your visitors first of all". Ideally that works, but not when you're dealing with small sites, and especially when you want to rank for keywords where there's not much to write about. I'll give you an example, still unsolved: I've got a client who just wants to rank first for his flagship store. Right now his site is in fourth position, and the site ranked first has no content and low authority, but it has the exact-match keyword domain. Tell me: what kind of content should I produce in order to rank for the name of the shop and the city? The only way is to get links... or to stay fourth. If you would like to help me, see more details below: page: http://poltronafraubrescia.zenucchi.it keyword: poltrona frau brescia competitor ranked first: http://turra.poltronafraubrescia.it/ competitor ranked second: http://poltronafraubrescia.com/