Help: forum (user-generated content) SEO best practices
-
Hello Moz folks!
For the very first time I'm dealing with a massive community that relies on UGC (user-generated content). Their forum has a great deal of duplicate content, broken links, duplicate titles, and other on-site issues. I have advanced SEO knowledge related to ecommerce and blogging, but I'm new to forums and UGC.
I would really love to learn, or get links to resources, that would help me understand the best practices in terms of SEO. Any help is greatly appreciated.
Best,
Yan
-
Should logged-in users and anonymous users get the same behavior?
For the most part, yes; however, it depends on the forum you are running. The important piece to understand is that whatever is hidden behind a login wall remains hidden to the search engines. So you have to weigh that factor when deciding which content to display to everyone versus which content to display only to logged-in users.
How do you suggest handling canonicals in a UGC world?
Canonicalization isn't too hard to manage. Your forum software should include canonical URLs; if it doesn't, you will want them implemented in the template as soon as possible. The use of rel=prev and rel=next tags is highly recommended for paginated threads. This allows you to keep the main forum thread as the canonical URL, and Google understands that the subsequent pages are related to the main page and how they add value.
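To make that concrete, here's a minimal sketch of what the head of page 2 of a paginated thread could look like under the approach described above (the forum and thread URLs are made up):

```html
<!-- Hypothetical <head> markup for page 2 of a paginated forum thread -->
<head>
  <!-- Canonical points at the main thread URL, as described above -->
  <link rel="canonical" href="https://forum.example.com/threads/duplicate-titles-help.123/">
  <!-- rel=prev/next tell Google where this page sits in the pagination sequence -->
  <link rel="prev" href="https://forum.example.com/threads/duplicate-titles-help.123/page-1">
  <link rel="next" href="https://forum.example.com/threads/duplicate-titles-help.123/page-3">
</head>
```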
Do you have specific editorial guidelines enforced on UGC?
Again, that's up to you and your community. What works editorially for one forum may not be the most desirable for another (e.g. the use of profanity). As long as the content being added is of value, I consider it good content. With forums, you can be a lot looser with the guidelines and allow users to interact as they like.
Don't let your forum become infested with spam or obvious self-promotional threads, and make sure all user-submitted links are nofollowed. Many forums place restrictions on new users with regard to links, and only once users prove themselves can they add links to their posts. Link and spam management are very important for forums.
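As an illustration, a forum template might render links inside user posts like this (a rough sketch; the class name and URL are made up):

```html
<!-- Hypothetical markup for a link inside a user's post: the template forces rel="nofollow" -->
<div class="post-body">
  Great write-up, more details here:
  <a href="https://example.com/users-own-site/" rel="nofollow">my site</a>
</div>
<!-- Note: newer forum platforms often use rel="ugc" (or rel="ugc nofollow") for user-submitted links -->
```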
-
Thanks Ray-PP,
Is there anything more specific? For example, should logged-in users and anonymous users get the same behavior? How do you suggest handling canonicals in a UGC world? Do you have specific editorial guidelines enforced on UGC? For example, should we noindex a post with a three-word question and an image?
Cheers,
Yan
-
Hello Yan,
Fortunately, the on-site SEO for UGC is not very different from the on-site SEO for other forms of content. We can still apply those best practices to forums and the UGC they contain.
Duplicate content / on-page factors
- Make sure the forum is using proper canonicalization
- Use rel=prev/next for paginated threads
- Use semantic SEO where appropriate
- Make sure all on-page SEO factors are optimized (title, headings, image optimization, etc.); a quick sketch follows below
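As a rough illustration, the on-page basics for a single thread page might look something like this (all titles, names, and URLs are made up):

```html
<!-- Hypothetical skeleton of a forum thread page with the basic on-page elements in place -->
<head>
  <title>Duplicate titles on category pages - how do I fix them? | Example Forum</title>
  <meta name="description" content="Community discussion on fixing duplicate title tags across forum category pages.">
  <link rel="canonical" href="https://forum.example.com/threads/duplicate-titles.456/">
</head>
<body>
  <!-- One descriptive H1 per thread; replies use lower-level headings -->
  <h1>Duplicate titles on category pages - how do I fix them?</h1>
  <article class="post">
    <h2>Reply from user123</h2>
    <img src="/uploads/crawl-report.png" alt="Crawl report showing duplicate title tags">
  </article>
</body>
```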
Broken links
- Use Moz or a tool like Screaming Frog SEO Spider to identify 404 pages. Redirect any important dead pages with a 301 to their nearest related page (focus on saving the SEO authority of the important dead pages).
Are there more specific issues you are experiencing with the forum?
Related Questions
-
Same content, different languages. Duplicate content issue? | international SEO
Hi, if the "content" is the same but is written in different languages, will Google see the articles as duplicate content? If Google won't see it as duplicate content, what is the benefit of implementing the alternate lang tag? Kind regards, Jeroen
Intermediate & Advanced SEO | chalet
-
Faceted Navigation URLs Best Practices
Hi, we are developing new product pages with faceted filters. You can see them here: https://www.viatrading.com/wholesale-products/ We have a feature allowing users to Order By and Group By, which alters the order of all products. There will also be an option to view products as a table, which will contain the same products but with a different design and maybe slightly different content for each product. All this will happen without changing the URL, https://www.viatrading.com/all/ Is this the best practice? Thanks,
Intermediate & Advanced SEO | viatrading1
-
Paragraphs/Tables for Content & SEO
Hi, does anyone know if Google prefers paragraphs over content in a table, or does it not make much difference?
Intermediate & Advanced SEO | BeckyKey
-
Help article / Knowledge base SEO consideration
Hi everyone, I am in the process of building the knowledge base for our SaaS product and I am afraid it could impact us negatively on the SEO side because of:
- Thin content on pages containing short answers to specific questions
- Keyword cannibalisation between some of our blog articles and the knowledge base articles
I didn't find much on the impact of knowledge bases on SEO when I searched on Google, so I'm hoping we can use this thread to share a few thoughts and best practices on this topic. Below is a bit more detail on the issues I face; any tips on how to address them would be most welcome.
1. Thin content: Some articles will have thin content by design: the H1 will be a specific question and there will be only 2 or 3 lines of text answering it in the article. I think creating a dedicated article per question is better than grouping 20 questions in one article from a UX point of view, because this will enable us to direct users more quickly to the answer when they use the live search function inside the software (help widget) or on the knowledge base (it saves them the need to scroll through a long article to find the answer). Now the issue is that this will result in lots of pages with thin content. A workaround could be to have both a detailed FAQ-style page with all the questions and answers, and individual articles for each question on top of that. The FAQ-style page could be indexed in Google while the individual articles would have either a noindex directive or a rel canonical to the FAQ-style page. Have any of you faced similar issues when setting up your knowledge base? Which approach would you recommend?
2. Keyword cannibalisation: There will be, to some extent, a level of keyword cannibalisation between our blog articles (which rank well) and some of the knowledge base articles. While we want both types of articles to appear in search, we don't want the "How to do XYZ" blog article containing practical tips to compete with the "How to do XYZ in the software" knowledge base article. Do you have any advice on how to achieve that? Having a specific Schema.org (or equivalent) type of markup to differentiate between the 2 types of articles would have been ideal, but I couldn't find anything relating to help articles specifically when I searched.
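Regarding the thin-content workarounds described under point 1, here is a minimal sketch of the head of one of those individual short articles (the article and URLs are hypothetical, and you would pick one option, not both):

```html
<!-- Hypothetical <head> for an individual thin knowledge base article -->
<head>
  <title>How do I reset my password? | Example Help Center</title>

  <!-- Option A: keep the individual article out of the index entirely -->
  <meta name="robots" content="noindex, follow">

  <!-- Option B: consolidate signals to the indexed FAQ-style page instead -->
  <link rel="canonical" href="https://help.example.com/faq/account-management/">
</head>
```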
Intermediate & Advanced SEO | tbps
-
What are the best practices for geo-targeting by sub-folders?
My domain is currently targeting the US, but I'm building out sub-folders that will need to geo-target France, England, and Spain. Each country will have its own sub-folder and be professionally translated (domain.com/france). Other than the hreflang tags, what other best practices can I implement? Can Google Webmaster Tools geo-target by subfolder? Any suggestions would be appreciated. Thanks, Justin
Intermediate & Advanced SEO | Rhythm_Agency
-
What is the best way to find related forums in your industry?
Hi Guys, Just wondering what is the best way to find forums in your industry?
Intermediate & Advanced SEO | edward-may
-
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi guys, we have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similar to autotrader.com or cargurus.com, and there are two primary components:
1. Vehicle Listings Pages: this is the page where the user can use various filters to narrow the vehicle listings and find the vehicle they want.
2. Vehicle Details Pages: this is the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo
The Vehicle Listings pages (#1), we do want indexed and to rank. These pages have additional content besides the vehicle listings themselves, and those results are randomized or sliced/diced in different and unique ways. They're also updated twice per day.
We do not want to index #2, the Vehicle Details pages, as these pages appear and disappear all of the time based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results: Example Google query.
We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
Robots.txt advantages:
- Super easy to implement
- Conserves crawl budget for large sites
- Ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.
Robots.txt disadvantages:
- Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would lead to 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?)
Noindex advantages:
- Does prevent vehicle details pages from being indexed
- Allows ALL pages to be crawled (advantage?)
Noindex disadvantages:
- Difficult to implement: vehicle details pages are served using Ajax, so they have no tag. The solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex directive based on querystring variables, similar to this stackoverflow solution. This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it).
- Forces (or rather allows) Googlebot to crawl hundreds of thousands of noindexed pages. I say "force" because of the crawl budget required. The crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed.
- Cannot be used in conjunction with robots.txt. After all, the crawler never reads the noindex meta tag if it is blocked by robots.txt.
Hash (#) URL advantages:
- By using hash (#) URLs for links from Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with JavaScript, the crawler won't be able to follow/crawl these links.
- Best of both worlds: crawl budget isn't overtaxed by thousands of noindexed pages, and the internal links used to index robots.txt-disallowed pages are gone.
- Accomplishes the same thing as "nofollowing" these links, but without looking like PageRank sculpting (?)
- Does not require complex Apache stuff
Hash (#) URL disadvantages:
- Is Google suspicious of sites with (some) internal links structured like this, since they can't crawl/follow them?
Initially, we implemented robots.txt -- the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate vehicle details pages, and we wanted it to be like these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that.
If we implement noindex on these pages (and doing so is a difficult task itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallowal in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of vehicle details pages, all of which are noindexed, and it could easily get stuck/lost/etc. It seems like a waste of resources, and in some shadowy way bad for SEO.
My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and conserves crawl budget while keeping vehicle details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links like these ().
Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
Intermediate & Advanced SEO | browndoginteractive
-
Best Practice for Inter-Linking to CCTLD brand domains
Team, I am wondering what people recommend as best SEO practice for inter-linking to language-specific brand domains, e.g.:
- amazon.com
- amazon.de
- amazon.fr
- amazon.it
Currently I have 18 ccTLDs for one brand in different languages (no DC). I am linking from each content page to each other language domain, providing a link to the equivalent content in a separate language on a different ccTLD domain. However, with Google's discouragement of site-wide links I am reviewing this practice. I am tending towards making the language redirects on each page JavaScript-driven and to start linking only from my home page to the other pages with optimized link titles. Anyone having any thoughts/opinions on this topic they are open to sharing? /Thomas
Intermediate & Advanced SEO | tomypro
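For what it's worth, a common complement to visible cross-links in this situation is hreflang annotations between the equivalent pages on each ccTLD. A minimal sketch using the example domains above (the paths and language/region codes are illustrative, and the same block must appear on every page listed):

```html
<!-- Illustrative hreflang annotations linking equivalent pages across ccTLDs -->
<link rel="alternate" hreflang="en-us" href="https://www.amazon.com/some-page/">
<link rel="alternate" hreflang="de-de" href="https://www.amazon.de/some-page/">
<link rel="alternate" hreflang="fr-fr" href="https://www.amazon.fr/some-page/">
<link rel="alternate" hreflang="it-it" href="https://www.amazon.it/some-page/">
<link rel="alternate" hreflang="x-default" href="https://www.amazon.com/some-page/">
```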