Duplicate Page content | What to do?
-
Hello Guys,
I have some duplicate pages detected by Moz. Most of the URLs are from a registration process for users, so the URLs all look like this:
www.exemple.com/user/login?destination=node/125%23comment-form
What should I do? Add this to robots.txt? If so, how? What's the directive to add in Google Webmaster Tools?
Thanks in advance!
Pedro Pereira
-
Hi Carly,
It needs to be done to each of the pages. In most cases, this is just a minor change to a single page template. Someone might tell you that you can add an entry to robots.txt to solve the problem, but that won't remove them from the index.
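To illustrate the template point: in most setups the shared page template renders the `<head>` for every generated page, so one edit covers all of them. A sketch (the `{{ ... }}` syntax here is illustrative templating, not from any specific CMS):

```html
<!-- Added once to the shared member-page template's <head>;
     every page generated from the template then carries the tag. -->
<meta name="robots" content="noindex">
```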
Looking at the links you provided, I'm not convinced you should deindex them all, as these are member profile pages which might have some value in terms of driving organic traffic and having unique content on them. That said, I'm not party to how your site works, so this is just an observation.
Hope that helps,
George
-
Hi George,
I am having a similar issue with my site, and was looking for a quick clarification.
We have several "member" pages that have been created as part of registration (thousands), and they are appearing as duplicate content. When you say add noindex and a canonical, is this something that needs to be done to every individual page, or is there something that can be done that would apply to the thousands of pages at once?
Here are a couple of examples of what the pages look like:
http://loyalty360.org/me/members/8003
http://loyalty360.org/me/members/4641
Thank you!
-
1. If you add just noindex, Google will crawl the page, drop it from the index but it will also crawl the links on that page and potentially index them too. It basically passes equity to links on the page.
2. If you add nofollow, noindex, Google will crawl the page, drop it from the index but it will not crawl the links on that page. So no equity will be passed to them. As already established, Google may still put these links in the index, but it will display the standard "blocked" message for the page description.
If the links are internal, there's no harm in them being followed unless you're opening up the crawl to expose tons of duplicate content that isn't canonicalised.
noindex is often used with nofollow, but sometimes this is simply due to a misunderstanding of what impact they each have.
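For reference, the two options above as they'd appear in the page `<head>` (standard meta robots syntax):

```html
<!-- Option 1: drop the page from the index, but still crawl its links
     and pass equity to them -->
<meta name="robots" content="noindex">

<!-- Option 2: drop the page from the index and don't follow its links,
     so no equity is passed -->
<meta name="robots" content="noindex, nofollow">
```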
George
-
Hello,
Thanks for your response. I have learned more, which is great.
My question is: should I add just noindex to that page, or noindex, nofollow?
Thanks!
-
Yes, it's the worst possible scenario: they basically get trapped in the SERPs. Google won't crawl them again until you allow the crawling; then you set noindex (to remove them from the SERPs), and once they've dropped out you add nofollow, noindex back on to keep them out of the SERPs and to stop Google following any links on them.
Configuring URL parameters again is just a directive regarding the crawl and doesn't affect indexing status to the best of my knowledge.
In my experience, noindex is bulletproof but nofollow / robots.txt is very often misunderstood and can lead to a lot of problems as a result. Some SEOs think they can be clever in crafting the flow of PageRank through a site. The unsurprising reality is that Google just does what it wants.
George
-
Hi George,
Thanks for this, it's very interesting... the URLs do appear in search results, but their descriptions are blocked (!)
Did you try configuring URL parameters in WMT as a solution?
-
Hi Rafal,
The key part of that statement is "we might still find and index information about disallowed URLs...". If you read the next sentence it says: "As a result, the URL address and, potentially, other publicly available information such as anchor text in links to the site can still appear in Google search results".
If you look at moz.com/robots.txt you'll see an entry for:
Disallow: /pages/search_results*
But if you search this on Google:
site:moz.com/pages/search_results
You'll find there are 20 results in the index.
I used to agree with you, until I found out the hard way that if Google finds a link, regardless of whether it's disallowed in robots.txt, it can put it in the index, and it will remain there until you lift the crawl restriction and noindex the page, or remove it using Webmaster Tools.
George
-
George,
I went to check with Google to make sure I am correct and I am!
"While Google won't crawl or index the content blocked by robots.txt, we might still find and index information about disallowed URLs from other places on the web." Source: https://support.google.com/webmasters/answer/6062608?hl=en
Yes, he can fix these problems on-page, but disallowing it in robots.txt will work fine too!
-
Just adding this to robots.txt will not stop the pages being indexed:
Disallow: /*login?
It just means Google won't crawl the links on that page.
I would do one of the following:
1. Add noindex to the page. PR will still be passed to the page but they will no longer appear in SERPs.
2. Add a canonical on the page to: "www.exemple.com/user/login"
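For reference, option 2 is a single link element in the `<head>` of each registration-variant page (using the example domain from the question):

```html
<!-- Points all the /user/login?destination=... variants at the clean
     login URL, consolidating them as one page -->
<link rel="canonical" href="http://www.exemple.com/user/login">
```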
You're never going to try and get these pages to rank, so although it's worth fixing I wouldn't lose too much sleep on the impact of having duplicate content on registration pages (unless there are hundreds of them!).
Regards,
George
-
In GWT: Crawl => URL Parameters => Configure URL Parameters => Add Parameter
Make sure you know what you are doing as it's easy to mess up and have BIG issues.
-
Add this line to your robots.txt to prevent Google from indexing these pages:
Disallow: /*login?