How do I prevent Google and Moz from counting pages as duplicates?
-
I have 130,000 profiles on my site. When a visitor is not connected to them, the pages have very few differences, so a bot (not logged in) just sees a login form and a "Connect to Profilename" button.
Moz and Google flag the pages as duplicates, even though the URLs are unique, such as:
example.com/id/328/name-of-this-group
example.com/id/87323/name-of-a-different-group
So how do I separate them? Can I use Schema or something to help identify that these are profile pages, or that the content on them should be ignored since it's help text, etc.?
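For example, something like this JSON-LD markup is what I had in mind, using schema.org's ProfilePage type (the name and URL are just placeholders from my example URLs above):

```json
{
  "@context": "https://schema.org",
  "@type": "ProfilePage",
  "mainEntity": {
    "@type": "Organization",
    "name": "Name of This Group",
    "url": "https://example.com/id/328/name-of-this-group"
  }
}
```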
Take Facebook: each Facebook profile for a name renders simple results:
https://www.facebook.com/public/John-Smith
https://www.facebook.com/family/Smith/
Would that be duplicate data if Facebook had a "Why to join" article on all of those pages?
-
What about this idea:
We can flesh out profiles with data, demographics, and contact info. No one cares about it, so we leave it off.
We can also customize each page with a list of the names connected to it, for those that have registrants.
So, two options: throw the demographic info up on each page, giving some unique content, and/or throw up the first name and last initial of registered members, then only index the profiles that have members?
However, 80% of our traffic comes from these "duplicate" pages.
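The second option could be sketched as a template helper — a hypothetical sketch, assuming each profile carries a list of registrant full names (the function names, sample names, and directive strings are all illustrative, not from any real system):

```python
# Hypothetical sketch: pick a robots directive and a unique content snippet
# for a profile page, based on whether it has registered members.

def robots_directive(registrants):
    """Index a profile only if it has registered members."""
    return "index, follow" if registrants else "noindex, follow"

def member_snippet(registrants):
    """Render 'first name + last initial' lines as unique on-page content."""
    lines = []
    for full_name in registrants:
        parts = full_name.split()
        first, last = parts[0], parts[-1]
        lines.append(f"{first} {last[0]}.")
    return ", ".join(lines)

profile_members = ["John Smith", "Jane Doe"]
print(robots_directive(profile_members))   # -> index, follow
print(member_snippet(profile_members))     # -> John S., Jane D.
print(robots_directive([]))                # -> noindex, follow
```

Profiles with members get indexable, lightly unique content; empty ones stay out of the index entirely.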
-
Yes, we need the directories to be found in Google.
These profile pages are places in an organization to register at. Our brand name contains three utterly generic words, so the only things showing up on the radar are these profile names.
Of course, removing them is a "solution," but no one hands a fat person a butcher knife and says "just cut it off."
I need to shape the content to be unique. I think it's our "pitch" text that has more characters than the profile itself.
-
Yes, adding noindex to all profile pages will solve any current or future issues you might have. There is no point in having those pages in the index if the "actual" content is invisible anyway, and no point keeping over 100k pages in the index with only boilerplate on them.
You should noindex all profiles ASAP. There is no organic value there, and even if you lose some traffic from those pages, the risk/reward trade-off (risk: losing some traffic; reward: keeping your domain safe overall) falls without question on the reward side.
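To implement that, a robots meta tag in the shared profile template is the usual route — shown here as a generic snippet, since I don't know your actual markup:

```html
<!-- in the <head> of every profile template -->
<!-- noindex drops the page from search results; follow still lets link equity flow -->
<meta name="robots" content="noindex, follow">
```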
Cheers.
-
One solution would be to not index the directory that has the profiles. Do you get many visits from organic search to these pages?
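If crawling (rather than just indexing) is the concern, a robots.txt rule can cover the whole profile directory — the /id/ path here is assumed from the example URLs, and note this only blocks crawling; pages already indexed need a noindex tag served first, since robots.txt alone doesn't remove them:

```text
# robots.txt at the site root — assumes profiles live under /id/ as in the example URLs
User-agent: *
Disallow: /id/
```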