Content that's behind CSS..
-
For content that's been loaded onto the page but requires a click to be revealed — as in a slider or a tab, to save space or for the page's organization — what are your thoughts on how Google counts or weights this content?
It would make sense for Google to give it partial or no weight: if Google attributes the content to the page, it's confusing for the user to land there and have to click around to find it.
Sorry if this is an obvious question to SEOs. I've always assumed that as long as the content was loaded, it would mostly be counted, but I'm beginning to doubt that assumption.
Thanks!
-
Thanks Kevin, and thanks for the Guidelines quote. Very helpful!
-ash
-
Yes, it's still valuable, but less valuable than unhidden ("important") content. If the hidden content is an extension of your core/important content, no worries. Do what is best for the user, even if you have a concern about making them drill down for additional info. This is from Google's Webmaster Guidelines:
"Make your site's important content visible by default. Google is able to crawl HTML content hidden inside navigational elements such as tabs or expanding sections, however we consider this content less accessible to users, and believe that you should make your most important information visible in the default page view."
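If you want to audit which of a page's text sits behind a hidden container in the served HTML, you can approximate it with a small parser. This is a minimal stdlib sketch, not how Google actually evaluates pages: it only treats an inline `style="display:none"` as hidden, and the sample markup and class name are invented for illustration.

```python
from html.parser import HTMLParser

class HiddenTextFinder(HTMLParser):
    """Collects page text, splitting it into visible vs. CSS-hidden buckets.

    Only inline style="display:none" is treated as hidden; real sites hide
    content via stylesheets and site-specific classes, so this is a sketch.
    """
    def __init__(self):
        super().__init__()
        self.hidden_depth = 0          # >0 while inside a hidden element
        self.visible, self.hidden = [], []

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style", "")
        is_hidden = "display:none" in style.replace(" ", "")
        if is_hidden or self.hidden_depth:
            self.hidden_depth += 1     # track nesting inside the hidden subtree

    def handle_endtag(self, tag):
        if self.hidden_depth:
            self.hidden_depth -= 1

    def handle_data(self, data):
        text = data.strip()
        if text:
            (self.hidden if self.hidden_depth else self.visible).append(text)

html = """
<div>
  <p>Visible intro paragraph.</p>
  <div class="tab-panel" style="display: none">
    <p>Details revealed on click.</p>
  </div>
</div>
"""
parser = HiddenTextFinder()
parser.feed(html)
print(parser.visible)  # ['Visible intro paragraph.']
print(parser.hidden)   # ['Details revealed on click.']
```

Run against your own templates, this at least tells you which copy ships hidden in the default view — the content the quoted guideline says Google considers "less accessible to users".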
Related Questions
-
Consolidating a Large Site with Duplicate Content
I will be restructuring a large website for an OEM. They provide products & services for multiple industries, and the product/service offering is identical across all industries. Looking at the site structure and running a crawl test, I learned they have a LOT of duplicate content out there because of the way they set up their website.

They have a page in the navigation for "solution", i.e. which industry you are in. Once that is selected, you are taken to a landing page and, from there, given many options to explore products, read blogs, learn about the business, and contact them. The main navigation is removed. The URL structure is set up with folders, so no matter what you select after choosing your industry, the URL will be "domain.com/industry/next-page". The product offerings, blogs, and contact pages do not vary by industry, so the content found on "domain.com/industry-1/product-1" is identical to the content found on "domain.com/industry-2/product-1", and so on.

This is a large site with a fair amount of traffic because it's a pretty substantial OEM. Most of their content, however, is competing with itself because most pages on the website duplicate one another. I won't begin my work until I can dive into their GA and have more in-depth conversations with them about what activity they're tracking and why they set up the website this way. I don't know how strategic they were in this setup, and I don't think they were aware they had duplicate content. My first thought is to consolidate the site structure so we don't spread the link equity of the "product-1" content: direct all industries to one page and track conversion paths a different way.

However, I've never dealt with a site structure of this magnitude and don't want to risk hurting their domain authority, missing redirect or URL-mapping opportunities, or breaking a site that is still performing well despite the duplication (most of these pages have high page authority and search visibility). Has anyone dealt with this before, and do you have any recommendations for tackling something like this?
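One way to reduce the risk of "missing redirect or URL mapping opportunities" is to generate the full 301 map programmatically from the crawl data before touching the site. A minimal Python sketch — the industry folders, page slugs, and the consolidated `/products/` target path are all invented placeholders, not the OEM's real structure:

```python
# Hypothetical folders and shared slugs; in practice these come from a crawl export.
INDUSTRIES = ["industry-1", "industry-2", "industry-3"]
SHARED_PAGES = ["product-1", "product-2", "contact-us"]

def build_redirect_map(industries, shared_pages, domain="https://domain.com"):
    """Map every industry-scoped duplicate URL to one consolidated URL.

    e.g. /industry-2/product-1 -> /products/product-1, so link equity
    pools on a single page instead of being split across N copies.
    """
    redirects = {}
    for page in shared_pages:
        target = f"{domain}/products/{page}"
        for industry in industries:
            redirects[f"{domain}/{industry}/{page}"] = target
    return redirects

redirect_map = build_redirect_map(INDUSTRIES, SHARED_PAGES)
print(len(redirect_map))  # 9: one 301 entry per duplicate URL
```

Having the complete map as data lets you diff it against the crawl (every duplicate URL should appear exactly once as a key) before any redirects go live.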
On-Page Optimization | cassy_rich
-
Hi - How do you get rid of duplicate content that was accidentally created on a tag URL? For example, when I published a new article, the content was duplicated on: /posts/tag/lead-generation/
The original article was created at: /posts/shippers-looking-for-freight-brokers/. How can I fix this so a new URL is not created every time I add a tag to a new post?
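The usual fixes are to noindex the tag archives or to point a rel=canonical from the tag URL at the original post; how you do that depends on your CMS. For auditing whether a canonical is actually in place, here is a rough Python sketch — a regex shortcut rather than a full HTML parser, and it assumes the `rel` attribute appears before `href` in the tag:

```python
import re

def extract_canonical(html: str):
    """Pull the rel=canonical URL out of a page's markup, if present.

    Crude audit helper: matches <link ... rel="canonical" ... href="...">
    with rel before href only.
    """
    m = re.search(r'<link[^>]+rel=["\']canonical["\'][^>]+href=["\']([^"\']+)', html)
    return m.group(1) if m else None

# Invented sample markup for a tag-archive page pointing at the original post.
tag_page = """<head>
<link rel="canonical" href="https://example.com/posts/shippers-looking-for-freight-brokers/">
</head>"""

print(extract_canonical(tag_page))
```

Fetching each `/posts/tag/...` URL and checking that this returns the original article's URL confirms the duplicates are consolidated without having to delete the tag pages.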
On-Page Optimization | treetopgrowthstrategy
-
Duplicate Content, Same Company?
Hello Moz Community, I am doing work for a company that has multiple locations — for example, examplenewyork.com, examplesanfrancisco.com, etc. They also have the same content on certain pages within each website; for example, examplenewyork.com/page-a has the same content as examplesanfrancisco.com/page-a. Does this duplicate content negatively impact us? Or could we rank for each page within each location (for example, people in New York searching for page-a would see one site, and people in San Francisco would see the other)? I hope this is clear. Thanks, Cole
On-Page Optimization | ColeLusby
-
Google's Page Layout Algorithm Change
Hello Everyone, Google says they've implemented this change in answer to complaints from users who have to search for actual content after clicking on a result; users want to see content right away. While most of this talk is about ads, I wonder if it will also apply to websites that are image- and Flash-heavy above the fold with very little content. I am working on a few auto dealer sites where 99% of the content above the fold is Flash banners and images. Below all of this noise you can find about 200 words of text about the dealership. I'd love to know everyone's thoughts on this... Does the new page layout algorithm change apply only to ads, or to images and Flash as well? Thanks
On-Page Optimization | wparlaman
-
Checking Duplicate Content
Hi there, we are migrating to a new website and writing lots of new content for it. The new site is hosted on a password-protected development server so that it cannot be indexed. What I would like to know is: how do I check for duplicate content issues out on the web when the dev site is password protected? Hope this makes sense. Kind regards,
On-Page Optimization | Paul78
-
Duplicate Content
We offer wellness programs for dogs and cats. A lot of the information is the same except for specifics that relate to young vs. senior pets. I have these different pages: Senior Wellness, Kitten Wellness, Puppy Wellness, and Adult Wellness. Can each page have approx. 75% of the same text? Or should I rewrite each page so the information (though the same) appears unique?
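Before deciding, it can help to actually measure the overlap between page drafts rather than estimate "75%". A small Python sketch using the standard library's difflib; the two text snippets are invented stand-ins for the real page copy:

```python
from difflib import SequenceMatcher

def page_similarity(text_a: str, text_b: str) -> float:
    """Rough 0..1 ratio of how much two pages' word sequences overlap."""
    return SequenceMatcher(None, text_a.split(), text_b.split()).ratio()

# Hypothetical copy for two of the wellness pages.
puppy = ("Our wellness program covers vaccinations, parasite prevention, "
         "and nutrition counselling tailored to growing puppies.")
senior = ("Our wellness program covers vaccinations, parasite prevention, "
          "and nutrition counselling tailored to aging senior pets.")

score = page_similarity(puppy, senior)
print(f"{score:.2f}")  # high score: the pages share most of their wording
```

Identical pages score 1.0; pages that only share a template score much lower. Whatever rewrite threshold you pick, comparing drafts this way keeps the "make each page appear unique" goal measurable.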
On-Page Optimization | PMC-312087
-
Another SEO's point of view
Hiya fellow SEOs, I have been working on a site - www.hplmotors.co.uk - and I must say it has become difficult due to flaws with the content management system. We are speaking with the website makers about being able to add a unique title and description to all pages. I know what is wrong, but I would also like some second opinions and welcome any suggestions for the site. A burnt-out SEO 🙂 thanks
On-Page Optimization | onlinemediadirect