Glossary index and individual pages create duplicate content. How much might this hurt me?
-
I've got a glossary on my site with an index page for each letter of the alphabet. Each index page lists every definition for that letter in full, so the M page shows the whole text of every M definition.
But each definition also has its own individual page (and we link to those pages internally so the user doesn't have to hunt through the entire M page).
So I definitely have duplicate content ... 112 instances (112 terms). Maybe it's not so bad, because each definition is just a short paragraph(?)
How much does this hurt my potential ranking for each definition? How much does it hurt my site overall?
Am I better off making the individual pages noindex, or canonicalizing them?
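For concreteness, here's roughly what those two options would look like in the head of an individual definition page (the URL is made up, just to illustrate):

```html
<!-- Option A: keep the definition page out of the index entirely,
     while still letting crawlers follow its links -->
<meta name="robots" content="noindex, follow">

<!-- Option B: leave the page indexable, but point its ranking signals
     at the letter index page (hypothetical URL) -->
<link rel="canonical" href="https://www.example.com/glossary/m/">
```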
-
Thanks, Ryan!
-
From here: http://moz.com/messages/write, sent to Dirk's username: DC1611. There used to be a button in profiles, but it looks like it got shuffled in the redesign.
-
PM? Does Moz offer that function?
-
It's a bit difficult to assess which of the pages is more important without knowing the site. Having a lot of content is good - but if the only thing linking the content together is that it all starts with the same letter, the index page could be pretty weak or pretty strong depending on the situation.
I'll give two examples:
Suppose the index is of first names starting with S - in this case the page is a valuable one, because a lot of people are searching for exactly that, and the search volume is potentially bigger than the number of people looking for the first name Steve (= one specific item).
Suppose the index is of illnesses starting with S - in this case the index page has very little value for a searcher, because people search for illnesses based on their symptoms; the fact that the illnesses all start with S doesn't link them together.
It could be helpful if you sent me the actual URLs via PM, if you don't want to disclose them here.
rgds
Dirk
-
Oops. Sorry. Poor wording there. Meant to say ...
Definitely not concerned that the M index page and the M definition page BOTH show up in the search results.
We definitely do want at least one of the pages not only to show up in the rankings, but to rank highly. I'm guessing the M index page would actually have a chance of ranking high, because it will contain so many long-tail terms related to our short-tail keyword.
But it would seem weird to put a noindex on the M definition pages ... since we have multiple internal links to those pages.
Thanks again for your patience. Really appreciate the feedback.
Steve
-
That's exactly what I am saying - from Google's perspective, your index page with all the definitions is completely different from the detailed definition page (the first one being much richer in content than the second). If getting these pages ranked is the least of your concerns, you can keep it as it is. If you want to play it safe, you can put a noindex on the index page.
rgds,
Dirk
-
Just having a bit of a dilemma. We're trying to make it easier for people who come to the glossary and then go to, say, the M page - they don't have to keep clicking away to see the definitions. Result: more user-friendly.
But we also want to have a very specific definition page, so that when we link from an article to a definition, the user doesn't have to see all of the M definitions. Result: more user-friendly.
Definitely not concerned that both the M index page and the M definition page show up in the search results. That would actually be swell. Just more concerned that our overall site ranking or domain authority will somehow suffer.
If you're saying that the M index page and the M definition pages are dramatically different (because the M index page is much, much longer) and so I shouldn't worry, that's great. (Hope that's what you're saying.)
Thanks!
-
Hi,
As far as I understand it, this is not really a question of duplicate content in the SEO sense. Although all the definitions starting with M are on the M index page, that page is quite different from the pages that contain the individual definitions of the terms starting with M.
A problem on many sites is that the pages containing only the explanation of one term are very light in terms of content, and that the page listing all these terms is generally not very interesting from a user (and search) perspective. I don't know your site, so it's difficult to assess whether this is the case.
You could make the index page noindex/follow and just list the terms, linking to the explanation pages. For the explanation pages, which are probably the most interesting for users and search engines: try to enrich them by adding more content, like links to articles on your site that use the term, or more information about the term.
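A minimal sketch of that setup (the term URLs are hypothetical): noindex keeps the index page itself out of the results, while follow still lets crawlers pass through its links to the definition pages.

```html
<!-- Hypothetical M index page: a bare list of terms, each linking
     to its own definition page -->
<head>
  <meta name="robots" content="noindex, follow">
  <title>Glossary: M</title>
</head>
<body>
  <ul>
    <li><a href="/glossary/macro/">Macro</a></li>
    <li><a href="/glossary/metadata/">Metadata</a></li>
    <li><a href="/glossary/mockup/">Mockup</a></li>
  </ul>
</body>
```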
Hope this helps,
Dirk