Duplicate content error?
-
I am seeing an error for duplicate content for the following pages:
http://www.bluelinkerp.com/contact/
http://www.bluelinkerp.com/contact/index.asp
Doesn't the first URL just automatically redirect to the default page in that directory (index.asp)? Why is it showing up as separate duplicate pages?
-
@Streamline is right - as soon as the engines encounter both versions, they see it as two pages. It's no problem for human visitors, but it can create issues with duplicate URLs in the Google index. You can either 301-redirect the "index.asp" version back to the cleaner, root URL or use a canonical. ASP/.Net can be weird about 301s, so the canonical is probably easier.
Generally, we suggest people canonical to the shorter/friendlier version, but the trouble here is that you're using the "/index.asp" version in your internal links. If you can change the internal links to the "/contact/" version, I'd prefer that, but if not, then set the canonical tag to the "/index.asp" version. The most important thing is consistency. If you link to one version but canonical to the other, Google could ignore your canonical tag. Put simply, your canonical URL isn't really canonical, in that case.
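If you do go the redirect route, here's a minimal sketch of what it could look like in classic ASP (this assumes classic ASP rather than .NET, and uses the clean URL from above), placed at the very top of /contact/index.asp:

```asp
<%
' Send a permanent (301) redirect from /contact/index.asp
' to the clean directory URL, then stop processing the page.
Response.Status = "301 Moved Permanently"
Response.AddHeader "Location", "http://www.bluelinkerp.com/contact/"
Response.End
%>
```

Just make sure the redirect only lives on index.asp itself, not on the /contact/ default document handler, or you'd bounce visitors in a loop.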
-
Thanks for the response! Is it best practice to specify the canonical URL as the "unspecific" link? Shouldn't I instead specify the canonical URL as "http://www.bluelinkerp.com/contact/index.asp"?
-
They're two different URLs.
If the URL changes but the content stays the same then it's classed as duplicate content.
I feel your pain though - the amount of duplicate pages I've ended up with just because copywriters like to capitalize their words...
-
There are several ways the search engines could have come across both versions of that page. If I had to guess, it's because somewhere on the web, or even on your own website, there are links to both URLs. It's a pretty common issue, but one that is easily resolved with the rel="canonical" tag.
Simply put the following code in the header of that page -
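Assuming you want the clean "/contact/" URL to be the canonical version, the tag would look something like this:

```html
<!-- inside the <head> of /contact/index.asp -->
<link rel="canonical" href="http://www.bluelinkerp.com/contact/" />
```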
This tells the search engines to use your designated URL anytime they access that page.
Related Questions
-
Is the number of duplicate pages reduced after adding a canonical ref to the dupe versions?
Hi, is the number of duplicate pages reported in a duplicate page content error report reduced on subsequent crawls, if you have resolved the duplicate content problem by adding the canonical tag to the duplicate versions (referencing the original page)? Like it would be if you were solving the problem via a 301 redirect (I think/presume)? Cheers, Dan
Moz Pro | Dan-Lawrence
-
403 error for a member site
Perhaps a stupid question, but SEOmoz registers 403 errors for pages behind a member site (i.e. they are restricted on purpose). Should I noindex these pages or just let SEOmoz register these "errors"?
Moz Pro | Crunchii
-
Question about Crawl Diagnostics - 4xx (Client Error) report
Hi there, I was wondering if there is a way to find out the originating page where a broken link is found from the 4xx (Client Error) report. I can't find a way to know that, and without that information it is very difficult for me to fix any possible 404-related issues on my website. Any thoughts are very welcome! Thank you in advance.
Moz Pro | fablau
-
How can I prevent duplicate page content errors generated by the tags on my WordPress on-site blog platform?
When I add meta data and a canonical reference to the blog tags on my on-site blog, which runs on a wordpress.org template, Roger generates duplicate content errors. How can I avoid this problem? I want to use up to 5 tags per post, each with the same canonical reference, and every campaign scan generates errors/warnings for me!
Moz Pro | ZoeAlexander
-
Duplicate page content on / and index.php
Hi, I am new to SEOmoz, and the crawl diagnostics for one of my clients came back with duplicate content on the homepage www.myclient.co.uk and on www.myclient.co.uk/index.php, which is obviously the same page. I understand that the key is to do a 301 redirect from the index to /, but how will I know that this will not just create a never-ending loop on the server? From your experience, what is the best way to tackle this crawl error? Also, is there something specific that I need to ask about the server?
Moz Pro | search_shop
-
Error on SEOMoz When Trying to Track Website. Please Advise
Hi, I'm trying to start a new campaign for a root domain, but I'm getting the "Roger found an error" message and am not sure what to make of it. Error #1: "You've decided to set up a root domain campaign, but entered the subdomain path: www.siteurl.com. Don't worry, we'll switch that for you and crawl everything on the subdomain: www.siteurl.com. If you meant to set this up to only crawl pages in the root domain, click 'Go Back and Change' and enter a root domain URL in step 1." Error #2: "Oops! The root domain siteurl.com redirects to a domain that is not within the specified root domain (www.siteurl.com). This will cause us to stop crawling as the first discovered page falls outside of the root domain you've defined. Please make sure you enter a root domain that resolves to a page that is under the root domain." What does this mean? Is there something I am doing wrong? The first error was returned when I input www.siteurl.com; the second was returned when I put just siteurl.com. I didn't include the exact URL for privacy reasons, but if you really do want to help me out, PM me and I can give you the real URL. Thanks in advance!
Moz Pro | locallyrank
-
Why am I getting an access error when creating my first campaign?
The exact error message is: The change you wanted was rejected. Maybe you tried to change something you didn't have access to. I checked the Word Count and I am definitely <300 keywords. I have also made sure that the Branded Keywords are entered 1 per entry form. My cookies are clear and there should be no issue with my browser.
Moz Pro | trufflelabs
-
We were unable to grade that page. We received a response code of 301. URL content not parseable
I am using the SEOmoz web app for SEO on my site and have run into this issue. Please see the attached file, as it has a screen scrape of the error. I am running an on-page scan from SEOmoz for the following URL: http://www.racquetsource.com/squash-racquets-s/95.htm When I run the scan, I receive the following error: "We were unable to grade that page. We received a response code of 301. URL content not parseable." This page had worked previously. I have tried to verify my 301 redirects and am unable to resolve this error. I can perform other on-page scans and they work fine. Is this a known problem with this tool? Any help would be appreciated.
Moz Pro | GeoffBatterham