Removing Duplicate Pages
-
Hi everyone. I'm sure this falls under the heading of novice SEO questions, but how do I remove duplicate pages from my site? I have not created the pages per se; there may be an internal link on a page that points to the page, causing the duplication. Do I remove the internal link? Here is a sample of a duplicate page.
I know the URL is way too long; I'm working on it.
Thanks for your feedback.
-
After comparing your links, I see that there are a few parameters your ticketing platform appends to the end of the URLs, such as ReturnURL and CntPageID. You can go into Google Webmaster Tools and Bing Webmaster Tools (and Yahoo Site Explorer as well) and tell these search engines to ignore those parameters on all URLs of your site. I'd also recommend canonical URLs if you have the option in your ticketing system. I'm not sure whether those parameter-handling settings are passed along to SEOmoz when you link your Google Webmaster Tools account. Anybody know?
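To make the duplication concrete, here's a minimal sketch in plain Python (standard library only) of what "ignore those parameters" means: URLs that differ only by those appended parameters collapse to one address. ReturnURL and CntPageID are the parameter names from the answer above; example.com and the event parameter are placeholders standing in for the real site.

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Parameters the ticketing platform appends that don't change page content.
# (Names taken from the answer above; adjust for your own site.)
TRACKING_PARAMS = {"returnurl", "cntpageid"}

def canonical_url(url: str) -> str:
    """Strip known tracking parameters so duplicate URLs collapse to one."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k.lower() not in TRACKING_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

# Two "different" URLs that actually serve the same page:
a = canonical_url("https://example.com/tickets?event=42&ReturnURL=%2Fhome")
b = canonical_url("https://example.com/tickets?event=42&CntPageID=7")
print(a == b)  # True - both normalize to https://example.com/tickets?event=42
```

This is only an illustration of the idea; the actual setting is a checkbox-style configuration inside each search engine's webmaster tools, not code you deploy.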
-
It looks like you have a dynamic canonicalization problem, i.e. the duplicate pages are generated dynamically.
You can set rel=canonical tags to solve the duplicate content issue.
As Daniel mentioned below, it is still not clear how these pages are being generated and linked.
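For anyone unfamiliar with the tag: it's a single line in the head of each duplicate (parameterized) version of a page, pointing search engines at the one version you want indexed. A sketch with a placeholder URL:

```html
<!-- Placed in the <head> of every duplicate/parameterized version of the page -->
<link rel="canonical" href="https://www.example.com/tickets/your-event/" />
```

If the ticketing system generates the pages, the tag usually has to be enabled in its settings rather than added by hand.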
-
Your question doesn't quite make sense as written. How do you remove a duplicate page? Delete it. If it is being caused by a backend issue, then use the rel=canonical tag to point to the original page, or 301 redirect the duplicates to the original if you can.
In your example I'm not sure what you are asking. The URLs are cut off in your question as well as in my browser. Are these examples of two different URLs showing an identical page? Or are you saying there is an internal link somewhere that points to the wrong place and returns a duplicate of the original page? If so, change the internal link to point to the right place.
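If the duplicates follow a predictable pattern, the 301 can often be handled in server config rather than page by page. A hedged sketch for Apache mod_rewrite, assuming the duplicates differ from the originals only by the ReturnURL parameter mentioned above (the pattern is an assumption; adjust it to your actual URLs and test before deploying):

```apache
# Sketch: 301-redirect any request carrying a ReturnURL query parameter
# to the same path with the query string stripped.
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)ReturnURL= [NC]
RewriteRule ^(.*)$ /$1? [R=301,L]
```

A 301 is the right choice over a 302 here because it tells search engines the move is permanent and consolidates link equity onto the original URL.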
Related Questions
-
Why does Google's search results display my home page instead of my target page?
Technical SEO | | h.hedayati6712365410 -
Getting a high-priority issue for our xxx.com and xxx.com/home as duplicate pages and duplicate page titles; I can't seem to find anything that needs to be corrected. What might I be missing?
I am getting a high-priority issue for our xxx.com and xxx.com/home, reported as both duplicate pages and duplicate page titles in crawl results. I can't seem to find anything that needs to be corrected, so what am I missing? Has anyone else had a similar issue, and how was it corrected?
Technical SEO | | tgwebmaster0 -
Best way to deal with over 1000 pages of duplicate content?
Hi. Using the Moz tools I have over 1,000 pages of duplicate content, which is a bit of an issue! 95% of the issues arise from our news and news archive, as it has been going for some time now. We upload around 5 full articles a day. Each article has a standalone page but can only be reached via a master archive. The master archive sits in a top-level section of the site and shows snippets of the articles; if a user clicks on one, it takes them to the full article page. When a news article is added, the snippets move onto the next page, and they move through the pages as new articles are added. The problem is that the standalone articles can only be reached via the snippet on the master page, and Google is flagging this as duplicate content because the snippet duplicates the article. What is the best way to solve this issue? From what I have read, using a meta noindex seems to be the answer (not that I know what that is). From what I have read, you can only use a canonical tag on a page-by-page basis, so that is going to take too long. Thanks, Ben
Technical SEO | | benjmoz0 -
Over 700+ duplicate content pages -- help!
I just signed up for SEOmoz Pro for my site. The initial report came back with over 700 duplicate content pages. My problem is that while I can see why some of the content is duplicated on some of the pages, I have no idea why the rest is coming back as duplicated. Is there a tutorial for a novice on how to read the duplicate content report and what steps to take? It's an e-commerce website, and there is some repetitive content on all the product pages, like our "satisfaction guaranteed" text and the fabric material, and not much other text. There's no unique product description because an image speaks for itself. Could this be causing the problem? I have lots of URLs with over 50 duplicates. Thanks for any help.
Technical SEO | | Santaur0 -
Duplicate Content on SEO Pages
I'm trying to create a bunch of content pages, and I want to know if the shortcut I took is going to penalize me for duplicate content. Some background: we are an airport ground transportation search engine (www.mozio.com), and we constructed several airport transportation pages listing the providers in a particular area. The problem is that sometimes multiple of the same providers serve the same places in a region. For instance, NYAS serves both JFK and LGA, and obviously SuperShuttle serves ~200 airports. This means that every airport's page has the SuperShuttle box. All the provider info is stored in a database with tags for the airports they serve, and then we dynamically create the page. Good examples follow: http://www.mozio.com/lga_airport_transportation/ http://www.mozio.com/jfk_airport_transportation/ http://www.mozio.com/ewr_airport_transportation/ All three of those pages have a lot in common. Now, I'm not sure, but they started out working decently, and as I added more and more pages their efficacy went down on the whole. Does what I've done qualify as "duplicate content", and would I be better off getting rid of some of the pages or somehow consolidating the info into a master page? Thanks!
Technical SEO | | moziodavid0 -
Mitigating duplicate page content on dynamic sites such as social networks and blogs.
Hello, I recently did an SEOmoz crawl for a client site. As is typical, the most common errors were duplicate page titles and duplicate content. The client site is a custom social network for researchers. Most of the pages showing as duplicates are simple variations of each user's profile, such as comment sections, friends pages, and events. So my question is: how can we limit duplicate content errors for a complex site like this? I already know about the rel=canonical tag and the rel=next tag, but I'm not sure either of these will do the job. Also, I don't want to lose potential links/link juice for good pages. Are there ways of using the "noindex" tag in batches? For instance: noindex all URLs containing a given character? Or do most CMSs allow this to be done systematically? Anyone with experience doing SEO for a custom social network or forum, please advise. Thanks!
Technical SEO | | BPIAnalytics0 -
Video thumbnail pages with "sort" feature -- tons of duplicate content?
A client has 2 separate pages for video thumbnails. One page is "popular videos" with a sort function, spanning over 700 pages of video thumbnails with 10 thumbnails and short descriptions per page (/videos?sort_by=popularity). The second page is "latest videos" (/videos?sort_by=latest) with over 7,000 pages. Both pages have a sort function, including latest, relevance, popularity, time uploaded, etc. Many of the same video thumbnails appear on both pages. Also, when you click a thumbnail you get a full video page, and these pages appear to get indexed well. There seem to be duplicate content issues between the "popular" and "latest" pages, as well as within the sort results on each of those pages (a unique URL is generated every time you use the sort function, i.e. /videos?sort_by=latest&uploaded=this_week). Before my head explodes, what is the best way to treat this? I was thinking a noindex,follow meta robots tag on every page of thumbnails, since the individual video pages are well indexed, but that seems extreme. Thoughts?
Technical SEO | | 540SEO0 -
Catch 22 on duplicate page titles
Hi all, I'm quite new to the SEO space, so I apologise if all the information below isn't technically perfect. I ran the SEOmoz Pro tool for the first time a month ago (fantastic tool). It picked up a wealth of errors on our site that we are now working on. The problem: we use dynamic pages to display job listings pulled from our database, and these have picked up many duplicate page titles and much duplicate content. For example: Landing page: http://www.arm.co.uk/jobs/it-contract-jobs/sec=itcontractjobs Page 2: http://www.arm.co.uk/jobs/1/-/-/2/itcontractjobs-/9999/2 Page 3: http://www.arm.co.uk/jobs/1/-/-/2/itcontractjobs-/9999/3 Following the results of the Moz tool, we have now noindexed and nofollowed the dynamic pages, and the errors have dramatically dropped. Great! However, on reflection, we generate quite a lot of traffic to individual jobs listed on our website. By nofollowing the pages we have stopped passing any 'juice' to them, and by noindexing we may be taking them out of Google's index completely. These dynamic pages and individual job listings generate a lot of traffic to our website via organic search. We do submit the site index to Google, which should index the individual jobs that way. So, the question is (I hope this is making sense): are the gains of reducing the errors picked up in the Moz tool (to improve overall site performance) likely to outweigh the traffic generated by these dynamically created pages being indexed and followed by Google? Ultimately we would like the static landing pages to retain stronger page rank. Any guidance is very much appreciated. Best Regards,
Sam.
Technical SEO | | ARMofficial0