Moz crawl duplicate page issues
-
Hi
According to the Moz crawl of my website, I have in the region of 800 pages flagged as internal duplicates. I'm a little puzzled by this, even more so because some of the pages it lists as duplicates of one another don't appear to be.
For example, the Moz crawler considers page B to be a duplicate of page A in the URLs below. (I'm not sure of the live link policy, so I've put a space in the URLs to 'unlive' them.)
Page A http:// nuchic.co.uk/index.php/jeans/straight-jeans.html?manufacturer=3751
Page B http:// nuchic.co.uk/index.php/catalog/category/view/s/accessories/id/92/?cat=97&manufacturer=3603
One is a filter page for Curvety Jeans and the other a filter page for Charles Clinkard Accessories. The page titles are different and the page content is different, so I've no idea why these would be considered duplicates. Thin, maybe, but not duplicates.
Likewise, pages B and C are considered duplicates of page A in the following:
Page A http:// nuchic.co.uk/index.php/bags.html?dir=desc&manufacturer=4050&order=price
Page B http:// nuchic.co.uk/index.php/catalog/category/view/s/purses/id/98/?manufacturer=4001
Page C http:// nuchic.co.uk/index.php/coats/waistcoats.html?manufacturer=4053
Again, these are product filter pages which the crawler would have found through the site's filtering system, but once more I cannot see what makes pages B and C duplicates of A.
Page A is a filtered result for Great Plains Bags (filtered from the general bags collection), Page B is the filtered result for Chic Look Purses from the Purses section, and Page C is the filtered result for Apricot Waistcoats from the Waistcoats section.
I'm keen to fix the duplicate content errors before the site goes properly live at the end of this month (which is why anyone kind enough to check the links will see a few design issues). However, in order to fix the problem I first need to work out what it is, and in this case I can't.
Can anyone else see how these pages could be considered duplicates of each other, please? Just checking I've not gone mad!
Thanks,
Carl
-
These days, content is king. It looks like there are a lot of similar internal links in the source code of these pages. When you have thin or no content, your internal link profile stands out a lot more.
What helped me overcome this for my company was focusing on aggregating customer reviews and having my customer service team write unique product descriptions. Social media was great for gathering reviews: we offered small coupons at first, and now our customers want to send them in. Unique product descriptions might be tough for clothes, but it isn't impossible.
Having a ridiculously duplicated internal link profile and no content is almost as detrimental to your organic rankings as a spammy external linking profile. You want to look like an eCommerce site and not an online catalog.
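To picture why that happens, here's a simplified, hypothetical sketch of a layered-navigation page (not Carl's actual markup): the header, nav and filter sidebar are identical on every filter URL, and the block that is genuinely unique to each filter combination is tiny by comparison.

```html
<!-- Hypothetical filter page, e.g. "Curvety Jeans" -->
<body>
  <!-- identical on every filter page: header, nav and the layered-navigation
       sidebar, i.e. hundreds of shared internal links -->
  <div class="sidebar">
    <a href="/jeans.html">Jeans</a>
    <a href="/dresses.html">Dresses</a>
    <a href="/bags.html">Bags</a>
    <!-- ...plus every brand, size and price filter link... -->
  </div>

  <!-- the only markup unique to this filter combination -->
  <div class="category-products">
    <h1>Curvety Jeans</h1>
    <!-- a handful of product thumbnails with little descriptive text -->
  </div>
</body>
```

If that unique block is only a few per cent of the source, a similarity-based duplicate check can group two such URLs together even though their titles and products differ.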
-
Hi Adam,
Thanks for the response. I tested the canonical side of things but found that it was stopping the filtered pages from being indexed. While we could get the 'Dresses' page indexed, we couldn't get 'Black Dresses', 'X retailer brand Dresses', etc. indexed, which we found a bit of an issue. On the filtered pages the canonical tag always pointed back to the category root.
We are using an SEO plugin for Magento, so maybe I will need to go back to the dev and ask them. I accept that not putting a canonical tag on the filtered pages could lead to internal duplicate content issues if a product can be found under dresses, red dresses, X brand dresses, X brand red dresses, and via price sorting.
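For anyone hitting the same thing, the trade-off Carl describes looks roughly like this (a hypothetical sketch, not the plugin's actual output): the plugin canonicalises every filtered URL back to the category root, so only the root version gets indexed, whereas a self-referencing canonical would let the filtered page be indexed in its own right.

```html
<!-- What the plugin was doing on a filtered page such as
     /jeans/straight-jeans.html?manufacturer=3751 — fold the filtered result
     into the category root, so the filter page itself is never indexed -->
<link rel="canonical" href="/jeans/straight-jeans.html" />

<!-- A self-referencing canonical would allow the filtered page to be indexed and
     rank for its own long-tail term, at the cost of reintroducing thin/duplicate variants -->
<link rel="canonical" href="/jeans/straight-jeans.html?manufacturer=3751" />
```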
Even though the site is still a work in progress, we are already seeing the filtered pages getting indexed and ranking fairly well. So, for example, we are ranking for terms such as 'Black Size 12 Evening Dress' (though I don't think we rank for that exact one). Sure, this term won't get millions of searches, but long tail converts very well. As much as I would love to be no. 1 for 'Dresses', we are not going to get there for a long, long time, especially given that the no. 1 brand for the term has a DA of 86, hundreds of thousands of links and over 2.1m G+ shares.
We are in a tricky position with the website. Normally we could rank for the filtered terms via the product pages easily enough; however, with all the product pages being pulled in externally, we need to find an alternative.
-
Hello Carl!
So I checked out the pages you listed, and I've had similar issues on my own e-commerce stores. There isn't much text on e-commerce pages and there tends to be a ton of links, so that always causes a problem for me. E-commerce stores and duplicate content go hand in hand, unfortunately.
I would suggest starting by adding canonical tags to your metadata. There are a few settings in Magento you can turn on, and that should take care of some of the problem. Here's a good resource: http://www.magentocommerce.com/knowledge-base/entry/canonical-meta-tag
From there you might want to consider making the meta descriptions on your products a bit more unique. Swapping out one word (the product name) doesn't make a page different or a non-duplicate. When the content is super thin, it's harder to make the pages, titles, and descriptions look unique to search engines. With e-commerce product pages, I understand the trouble with putting text on filter pages: it's just not practical and doesn't look right. But it's important to optimize where you can, and the meta descriptions are one of those places. Here's another resource for that: http://moz.com/ugc/our-forgotten-friend-the-meta-description
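As a rough illustration of both points (hypothetical markup, not taken from the site): with the canonical settings above turned on and a description written for the product rather than the template, the head of a product page might look something like this.

```html
<head>
  <title>Black Wrap Evening Dress | Example Store</title>
  <!-- output by Magento once the canonical settings are enabled -->
  <link rel="canonical" href="/dresses/black-wrap-evening-dress.html" />
  <!-- written for this product, not the template text with one word swapped -->
  <meta name="description" content="Black wrap evening dress in stretch jersey with a V-neckline and tie waist, available in sizes 8 to 18." />
</head>
```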
Hope that helps!
-
It might be worth adding that I'm aware all the product pages are duplicate content from other websites; the shop section of the site is an affiliate store. However, all the product pages are set to noindex for the search engines as a result. The internal links between the category pages and the product pages will be made nofollow in the coming days. If the engines can't index the individual products, there's little point wasting bandwidth on them crawling 200,000 products!
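For reference, the setup described boils down to something like this (a minimal, hypothetical sketch; the product URL and ID are made up): a robots meta tag on each affiliate product page, and nofollow on the category-to-product links once that change is made.

```html
<!-- in the <head> of every affiliate product page: keep it out of the index -->
<meta name="robots" content="noindex" />

<!-- on category and filter pages, the links through to those products
     (the change planned for the coming days) -->
<a href="/catalog/product/view/id/123456/" rel="nofollow">Example product</a>
```

One thing to watch: noindex only takes effect if the crawler can still fetch the page and see the tag, so the product URLs shouldn't also be blocked in robots.txt.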