Roger keeps telling me my canonical pages are duplicates
-
I've got a site that's brand spanking new that I'm trying to get the error count down to zero on, and I'm basically there except for this odd problem. Roger got into the site like a naughty puppy a bit too early, before I'd put the canonical tags in, so there were a couple thousand 'duplicate content' errors. I put canonicals in (programmatically, so they appear on every page) and waited a week and sure enough 99% of them went away.
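For anyone hitting the same thing: the tag in question is a one-line element injected into each page's `<head>`, along these lines (the URL shown is just the placeholder style from the crawl report; whether you canonicalize to the upper- or lowercase form, pick one and use it everywhere):

```html
<link rel="canonical" href="http://www.site.com/product-1.aspx" />
```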
However, there's about 50 that are still lingering, and I'm not sure why they're being detected as such. It's an ecommerce site, and the duplicates are being detected on the product page, but why these 50? (there's hundreds of other products that aren't being detected). The URLs that are 'duplicates' look like this according to the crawl report:
http://www.site.com/Product-1.aspx
http://www.site.com/product-1.aspx
And so on. Canonicals are in place, and have been for weeks, and as I said there's hundreds of other pages just like this not having this problem, so I'm finding it odd that these ones won't go away.
All I can think of is that Roger is somehow caching stuff from previous crawls? According to the crawl report these duplicates were discovered '1 day ago' but that simply doesn't make sense. It's not a matter of messing up one or two pages on my part either; we made this site to be dynamically generated, and all of the SEO stuff (canonical, etc.) is applied to every single page regardless of what's on it.
If anyone can give some insight I'd appreciate it!
-
ThompsonPaul -
Thanks for that info, it pretty much nails exactly what I had discovered independently. This is an IIS7/Win2k8R2 install so luckily the rewriting is a bit easier than in previous iterations. The whole platform is hand coded by us (after the 10th ecommerce site or so you can generally do them in your sleep) so I don't have to worry about CMS implementation and the like, and luckily we already knew that about the spaces so they simply aren't allowed in the filenames. I'm in the middle of making a regex right now that is going to down-case anything in an href="" or src="" tag that will hopefully handle everything on the site side user-created or not. Will consider what to do in regards to external links a bit down the road I think.
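A rough sketch of that kind of down-casing regex (shown in Python for illustration; the actual site is ASP.NET, and the assumption is that only the quoted values of `href` and `src` attributes need touching):

```python
import re

# Lowercase the URL inside every href="..." or src="..." attribute.
# Group 1 keeps the attribute name and opening quote intact; only the
# quoted value itself (group 2) is down-cased.
ATTR_URL = re.compile(r'((?:href|src)\s*=\s*")([^"]*)(")', re.IGNORECASE)

def lowercase_link_urls(html: str) -> str:
    return ATTR_URL.sub(
        lambda m: m.group(1) + m.group(2).lower() + m.group(3), html
    )

print(lowercase_link_urls('<a href="/Product-1.aspx">Product 1</a>'))
# <a href="/product-1.aspx">Product 1</a>
```

One caveat with this approach: it also lowercases query strings, which matters if any parameter values are case-sensitive.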
-
Valery, you're definitely going to want to normalize your URLs to lowercase. This is a quirk of IIS: unlike Unix-based servers, it ignores case when resolving a URL, so every case variant of a page's address returns the same content with a 200 status, and search engines end up treating each variant as a separate page.
In addition to the search engine problems that creates, it's also a major problem for usability, yours and your users'. A visitor typing in a direct URL lands on whichever case variant they happened to type, and any links they share or bookmark point at that variant rather than the one you prefer.
More importantly, Google Analytics will report each of those versions as a separate page unless you write a normalizing filter into your GA profiles. Better to do that normalization on the actual site, not just in your analytics.
While rel=canonical can resolve a number of issues, I've always found it vastly better to correct the actual problem at its root rather than rely on canonicalization as a catch-all. Anecdotally, pages corrected with rewrites seem to rank better than pages merely patched with a canonical tag. Wish I could find time to do an actual case study on that.
Managing rewrites on IIS has traditionally required an add-on such as ISAPI_Rewrite, since IIS doesn't handle it out of the box; on IIS7 you can use Microsoft's free URL Rewrite module.
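For example, with Microsoft's URL Rewrite module installed on IIS7, a site-wide lowercase redirect can be expressed in web.config roughly like this (the rule name is arbitrary):

```xml
<system.webServer>
  <rewrite>
    <rules>
      <rule name="LowercaseRedirect" stopProcessing="true">
        <!-- Match any URL containing an uppercase letter -->
        <match url="[A-Z]" ignoreCase="false" />
        <!-- 301-redirect to the all-lowercase version -->
        <action type="Redirect" url="{ToLower:{URL}}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```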
P.S. IIS will also allow and respect spaces in URLs. Internet Explorer displays them with the spaces intact, while browsers like Firefox show the percent-encoding for a space (%20) in each necessary spot in the URL. This is again a mess for usability, so it's much better to replace spaces with dashes when creating new pages. Many CMSs have plugins for this, or you can use sitewide rewrites to fix it after the fact.
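A slug-generating step like the one described might look like this (a sketch in Python; the exact character whitelist is a judgment call):

```python
import re

def make_slug(title: str) -> str:
    """Turn a page title into a URL slug: lowercase, with runs of
    spaces and punctuation collapsed to single dashes, and no
    leading or trailing dashes."""
    slug = title.lower()
    slug = re.sub(r'[^a-z0-9]+', '-', slug)  # spaces & punctuation -> dash
    return slug.strip('-')

print(make_slug("HTC Touch Diamond2 Review"))  # htc-touch-diamond2-review
```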
-
I think I get your point; the canonical is pointing to where the juice should go, but the URLs are still functionally different things. I'm guessing some sort of URL rewrite is in order, and to standardize how I do in-text links on the site (with user-editable content this part could be a pain).
-
Hey Valery,
I see those on closer inspection. I know it looks weird, but that's accurate. As far as crawlers are concerned, URLs are case-sensitive, so two addresses that differ only in case count as two different URLs.
For example: banana.com/pancakes.html and banana.com/PanCakes.html are treated as two distinct pages.
So if your server returns the same content for both variants of a URL, dynamically generated or otherwise, those pages will be tagged as duplicates.
In your CSV file you can see the duplicates being caused by case. I'd also be happy to provide a few specific examples, but I'd want to open a ticket for you so we don't divulge any private information.
Cheers,
Joel.
-
Joel -
Thanks a lot for looking into that. The pages are very similar, so I'm not surprised they're being flagged as duplicates; what does surprise me is that they're apparently being considered duplicates of a canonical version of themselves. When I click on the duplicate list I'm expecting to see:
Product1.aspx
Product1-Blue.aspx
Product1-Red.aspx
But instead I'm seeing:
Product1.aspx
product1.aspx
product1.ASPX
And so on. The first scenario implies that the three pages are duplicates of each other, whereas the second says that there's either a canonical problem or I literally have different-case versions of those files.
-
Hi Valery,
I took a peek at your campaign, and it looks like those few remaining duplicate pages are in fact different, but only in very minor ways; basically, there are separate pages for different sizes of the same product.
While technically different, they vary in such minute ways that Roger sees them as duplicates.
I hope that answers the question.
Thanks,
Joel.