Duplicate Content - Multiple URLs
-
I know a few of these problems come from products being in the same categories, but I have no idea how to get rid of the URLs that are showing duplicate content when the product is in the exact same place. It's hard to explain, but here are some example URLs.
Any ideas how to fix or get rid of these URLs?
Thanks!
-
That must be really frustrating for you, Mike.
I am not surprised you can't access them in the back end, as these are almost always a by-product of the e-commerce system. So many of them don't seem to care about the extra churn their sites produce.
You could always block those bad pages with robots.txt. Just something like:
Disallow: /store/pc/www.ocelco.com/*
This should still allow everything else under http://www.ocelco.com/store/pc/... to be indexed.
Give it a try, then run the site through Screaming Frog and see whether those bad pages are still being picked up.
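As a fuller sketch, that directive would sit under a User-agent line in the robots.txt at your site root (this assumes the duplicate URLs all repeat the domain after /store/pc/, as in your examples):

```text
# robots.txt at http://www.ocelco.com/robots.txt
User-agent: *
# Block the duplicated paths that repeat the domain inside the URL
Disallow: /store/pc/www.ocelco.com/*
```

One caveat: the * wildcard is honoured by Google and Bing, but not by every crawler, so don't rely on it as the only fix.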
-Andy
-
Andy,
I agree, and I am not getting much help from our developers. All I know is that these URLs shouldn't be there, and I can't even access them on the back end; only the one original product page is accessible.
I don't know how to remove them.
Thanks,
- Mike Bean
-
What are you using to power the site, Mike?
I would suggest looking at the site configuration, or contacting the developers and asking them how to correct this, as it looks a lot like a misconfiguration somewhere. 301s might help for now, but you need to figure out why these extra pages are being created, because I suspect that if you add more products, the same issue will surface again.
Also, when I look at the first two links I see a lot of rendering issues that I don't see on the bottom two links. Definitely something amiss.
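As a rough sketch of the 301 approach (assuming the site runs on Apache with mod_rewrite, and that the bad URLs all repeat the domain after /store/pc/), an .htaccess rule could redirect each duplicate back to its canonical path:

```apache
# Hypothetical .htaccess at the site root; adjust to your real URL structure
RewriteEngine On
# Redirect /store/pc/www.ocelco.com/<anything> to /store/pc/<anything>
RewriteRule ^store/pc/www\.ocelco\.com/(.*)$ /store/pc/$1 [R=301,L]
```

Treat this as a stopgap only; it cleans up the symptom, but the system will keep generating the bad URLs until the underlying misconfiguration is fixed.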
-Andy