Question About Thin Content
-
Hello,
We have an encyclopedia-type page on our e-commerce site. Basically, it's a page listing terms related to our niche: product definitions, slang terms, etc.
Each term on the encyclopedia page links to its own page, which contains the term and a very short definition (about 1-2 sentences).
The purpose of these pages is to link to them from product pages when a product has a feature or function that may be new to our customers.
We have about 82 of these pages. Are these pages more likely to help us because they're providing information to visitors, or are they likely to hurt us because of the very small amount of content on each page?
Thanks for the help!
-
Thank you EGOL!
-
I would be afraid of 82 pages with a sentence or two each. I would take one of two routes if this were my site...
A) Beef up each of these pages with one or two photos and one or two paragraphs, at least 100 words in total.
B) Place all 82 of the definitions on a single big page that has on-page anchors, so that links from other parts of your site can point straight to a specific definition.
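The single-page approach in option B can be sketched as a small build script. This is just an illustration, not anything Moz or EGOL prescribes; the slug rules and the `<dl>` markup are assumptions you would adapt to your own templates:

```python
import html
import re

def slugify(term):
    # Lowercase and replace runs of non-alphanumerics with hyphens,
    # e.g. "Ball Bearing" -> "ball-bearing" (hypothetical slug rule).
    return re.sub(r"[^a-z0-9]+", "-", term.lower()).strip("-")

def build_glossary_page(definitions):
    """Render all definitions as one HTML definition list.

    Each term gets an id attribute, so product pages elsewhere on the
    site can deep-link straight to it, e.g. /glossary#ball-bearing.
    """
    rows = []
    for term, definition in sorted(definitions.items()):
        anchor = slugify(term)
        rows.append(f'<dt id="{anchor}">{html.escape(term)}</dt>')
        rows.append(f"<dd>{html.escape(definition)}</dd>")
    return "<dl>\n" + "\n".join(rows) + "\n</dl>"
```

A product page would then link to `/glossary#ball-bearing` instead of a separate thin page, consolidating all 82 definitions into one crawlable URL.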
We have an industry glossary on one of our sites with a few thousand terms. Most terms are accompanied by one photo and 50 to 200 words of text. We don't have individual pages for each term; instead we have 26 pages, one for each letter of the alphabet. Some of these pages have over 100 terms. For several hundred of these terms we also have a substantive article with 500 to 5,000 words and numerous photos, and the bolded term in the glossary links to that article page.
Back in the early 2000s, the glossary was a good source of links and it got a lot of traffic. The value of the glossary for attracting links and traffic has declined over time. The value of the article collection has grown.