Duplicate Page content | What to do?
-
Hello Guys,
I have some duplicate pages detected by Moz. Most of the URLs come from a registration process for users, so the URLs all look like this:
www.exemple.com/user/login?destination=node/125%23comment-form
What should I do? Add this to robots.txt? If so, how? What's the command to add in Google Webmaster Tools?
Thanks in advance!
Pedro Pereira
-
Hi Carly,
It needs to be done to each of the pages. In most cases, this is just a minor change to a single page template. Someone might tell you that you can add an entry to robots.txt to solve the problem, but that won't remove them from the index.
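As a rough sketch of what that template change usually looks like (the exact markup depends on your CMS, and the canonical URL below is just a placeholder, not your real preferred page):

```html
<head>
  <!-- Drop the page from the index while still letting Google crawl it -->
  <meta name="robots" content="noindex">

  <!-- Or, alternatively, point duplicate variants at the preferred version -->
  <link rel="canonical" href="http://www.example.com/preferred-page">
</head>
```

Because it lives in the shared template's head, one edit covers every page rendered from that template.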
Looking at the links you provided, I'm not convinced you should deindex them all, as these are member profile pages which might have some value in terms of driving organic traffic and having unique content on them. That said, I'm not privy to how your site works, so this is just an observation.
Hope that helps,
George
-
Hi George,
I am having a similar issue with my site, and was looking for a quick clarification.
We have several "member" pages that have been created as part of registration (thousands of them), and they are appearing as duplicate content. When you say add noindex and a canonical, is this something that needs to be done to every individual page, or is there something that can be done that would apply to the thousands of pages at once?
Here are a couple of examples of what the pages look like:
http://loyalty360.org/me/members/8003
http://loyalty360.org/me/members/4641
Thank you!
-
1. If you add just noindex, Google will crawl the page and drop it from the index, but it will also crawl the links on that page and potentially index them too. It basically passes equity to the links on the page.
2. If you add noindex, nofollow, Google will crawl the page and drop it from the index, but it will not crawl the links on that page, so no equity will be passed to them. As already established, Google may still put those links in the index, but it will display the standard "blocked" message for the page description.
If the links are internal, there's no harm in them being followed unless you're opening up the crawl to expose tons of duplicate content that isn't canonicalised.
noindex is often used with nofollow, but sometimes this is simply due to a misunderstanding of what impact they each have.
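To make the two options above concrete, they are just different values in the same robots meta tag (a generic sketch, to be placed in each page's head):

```html
<!-- Option 1: drop the page from the index, but still follow its links
     and pass equity to them -->
<meta name="robots" content="noindex">

<!-- Option 2: drop the page from the index AND stop Google following
     any links on it (no equity passed) -->
<meta name="robots" content="noindex, nofollow">
```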
George
-
Hello,
Thanks for your response. I have learned more, which is great.
My question is: should I add noindex only to that page, or noindex, nofollow?
Thanks!
-
Yes, it's the worst possible scenario: they basically get trapped in the SERPs. Google won't crawl them again until you allow crawling, then set noindex (to remove them from the SERPs), and then add nofollow, noindex back on to keep them out of the SERPs and stop Google following any links on them.
Configuring URL parameters again is just a directive regarding the crawl and doesn't affect indexing status to the best of my knowledge.
In my experience, noindex is bulletproof but nofollow / robots.txt is very often misunderstood and can lead to a lot of problems as a result. Some SEOs think they can be clever in crafting the flow of PageRank through a site. The unsurprising reality is that Google just does what it wants.
George
-
Hi George,
Thanks for this, it's very interesting... the URLs do appear in search results, but their descriptions are blocked(!)
Did you try configuring URL parameters in WMT as a solution?
-
Hi Rafal,
The key part of that statement is "we might still find and index information about disallowed URLs...". If you read the next sentence it says: "As a result, the URL address and, potentially, other publicly available information such as anchor text in links to the site can still appear in Google search results".
If you look at moz.com/robots.txt you'll see an entry for:
Disallow: /pages/search_results*
But if you search this on Google:
site:moz.com/pages/search_results
You'll find there are 20 results in the index.
I used to agree with you, until I found out the hard way that if Google finds a link, regardless of whether it's in robots.txt or not it can put it in the index and it will remain there until you remove the nofollow restriction and noindex it, or remove it from the index using webmaster tools.
George
-
George,
I went to check with Google to make sure I am correct and I am!
"While Google won't crawl or index the content blocked by robots.txt, we might still find and index information about disallowed URLs from other places on the web." Source: https://support.google.com/webmasters/answer/6062608?hl=en
Yes, he can fix these problems on-page, but disallowing it in robots.txt will work fine too!
-
Just adding this to robots.txt will not stop the pages being indexed:
Disallow: /*login?
It just means Google won't crawl the page, so it won't follow the links on it either.
I would do one of the following:
1. Add noindex to the page. PR will still be passed to the page but they will no longer appear in SERPs.
2. Add a canonical on the page to: "www.exemple.com/user/login"
You're never going to try and get these pages to rank, so although it's worth fixing I wouldn't lose too much sleep on the impact of having duplicate content on registration pages (unless there are hundreds of them!).
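A minimal sketch of option 2, using the login URL pattern from your question (adjust the href to whatever your preferred clean URL actually is):

```html
<!-- In the <head> of the login page template: all the
     ?destination=... variants consolidate to the clean login URL -->
<link rel="canonical" href="http://www.exemple.com/user/login">
```

Since the canonical goes in the login page template, it covers every parameterised variant of that URL at once.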
Regards,
George
-
In GWT: Crawl => URL Parameters => Configure URL Parameters => Add Parameter
Make sure you know what you are doing as it's easy to mess up and have BIG issues.
-
Add this line to your robots.txt to prevent Google from indexing these pages:
Disallow: /*login?