Www vs no-www duplicate fix?
-
Hi all, I have more or less published two versions of our site. One on "www" and one without. And of course we uncovered it during our SEO crawl as "duplicate" content/titles. My guess (hope) is this is something that can be easily fixed on the server side, but I don't have a lot of knowledge around it. Does anyone know?
-
-
One more follow up - what about for an IIS server?
-
This will make it non-www. If you want it to default to www, use this:
RewriteEngine On
RewriteCond %{HTTP_HOST} ^domain\.com [NC]
RewriteRule (.*) http://www.domain.com/$1 [L,R=301]
-
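For the IIS follow-up above: IIS has no .htaccess, but the same canonical-host redirect can be done with the URL Rewrite module in web.config. A minimal sketch, assuming the URL Rewrite module is installed and using domain.com as a placeholder for your own domain:

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- 301-redirect any request for the bare domain to the www host -->
        <rule name="Redirect to www" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^domain\.com$" />
          </conditions>
          <action type="Redirect" url="http://www.domain.com/{R:1}" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```

Swap the condition pattern to `^www\.domain\.com$` and the action URL to the bare domain if you prefer the non-www version.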
Quick question on this - will this code set it so the URL will read with the "www" or without?
-
Thank you both for your responses! I was hoping it would be something as simple as this.
Best,
Becky
-
Also go into Google Webmaster Tools and set the www preference under Configuration.
-
Easy fix:
In your .htaccess file, use this:
RewriteEngine On
RewriteCond %{HTTP_HOST} !^domain\.com
RewriteRule (.*) http://domain.com/$1 [R=301,L]
Remember to replace domain.com with your domain name. Enjoy!
Related Questions
-
Personalized Content Vs. Cloaking
Hi Moz Community, I have a question about personalization of content: can we serve personalized content without being penalized for serving different content to robots vs. users? If content starts in the same initial state for all users, including crawlers, is it safe to assume there should be no SEO impact, since personalization will not happen for anyone until there is some interaction? Thanks,
Technical SEO | | znotes0 -
Duplicate content question...
I have a high duplicate content issue on my website. However, I'm not sure how to handle or fix this issue. I have 2 different URLs landing to the same page content. http://www.myfitstation.com/tag/vegan/ and http://www.myfitstation.com/tag/raw-food/ .In this situation, I cannot redirect one URL to the other since in the future I will probably be adding additional posts to either the "vegan" tag or the "raw food tag". What is the solution in this case? Thank you
Technical SEO | | myfitstation0 -
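For overlapping tag archives like these, one common option (a sketch, not from the original thread) is a rel=canonical tag: pick one tag page as the primary version and reference it from the other's head. Assuming here that the vegan tag page is kept as the primary:

```html
<!-- In the <head> of http://www.myfitstation.com/tag/raw-food/ -->
<link rel="canonical" href="http://www.myfitstation.com/tag/vegan/" />
```

This is only appropriate while the two archives really show the same posts; if they diverge later, the tag should be removed so each page can rank on its own.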
Removal of duplicate content
Due to the WordPress setup, I'm having a lot of duplicates that I will remove soon. What is the best way to decide which ones to keep and which ones to remove?
Technical SEO | | Joseph-Green-SEO0 -
How can we fix duplicate title tags like these being reported in GWT?
Hi all, I posted this in the GWT Forum on Monday and still no answers, so I will try here. Our URL is http://www.ccisolutions.com
We have over 200 pages on our site being flagged by GWT as having duplicate title tags. The majority of them look similar to this: Title: JBL EON MusicMix 16 | Mixer | CCI Solutions. GWT is reporting these URLs to all have the same title:
/StoreFront/product/R-JBL-MUSICMIX.prod
/StoreFront/product/R-JBL-MUSICMIX.prod?Origin=Category
/StoreFront/product/R-JBL-MUSICMIX.prod?Origin=Footer
/StoreFront/product/R-JBL-MUSICMIX.prod?Origin=Header
/StoreFront/product/R-JBL-MUSICMIX.prod?origin=..
/StoreFront/product/R-JBL-MUSICMIX.prod?origin=GoogleBase
These are all the same page. There was a time when we used these origin codes, but we stopped using them over a year ago. We also added canonical tags to every page to prevent duplicate content issues. However, these origin codes are still showing up in GWT. Is there anything we can do to fix this? Do we have a technical issue with our site code and the way Google is seeing our dynamic URLs? The same is true in our report for meta descriptions. Thank you,
Dana Tan
Technical SEO | | danatanseo0 -
Duplicate content
I'm getting an error showing that two separate pages have duplicate content. The pages are:
Help System: Domain Registration Agreement - Registrar Register4Less, Inc. http://register4less.com/faq/cache/11.html
Help System: Domain Registration Agreement - Register4Less Reseller (Tucows) http://register4less.com/faq/cache/7.html
These are both registration agreements, one for us (Register4Less, Inc.) as the registrar, and one for Tucows as the registrar. The pages are largely the same, but are in fact different. Is there a way to flag these pages as not being duplicate content? Thanks, Doug.
Technical SEO | | R4L0 -
Issue: Duplicate Pages Content
Hello, Following the setting up of a new campaign, SEOmoz Pro says I have a duplicate page content issue. It says the following are duplicates: http://www.mysite.com/ and http://www.mysite.com/index.htm This is obviously true, but is it a problem? Do I need to do anything to avoid a Google penalty? The site in question is a static HTML site and the real page only exists at http://www.mysite.com/index.htm, but if you type in just the domain name then that brings up the same page. Please let me know what, if anything, I need to do. This site, by the way, had a Panda 3.4 penalty a few months ago. Thanks, Colin
Technical SEO | | Colski0 -
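One way to consolidate the two URLs above (a sketch, assuming Apache with mod_rewrite and mysite.com as a placeholder) is to 301-redirect explicit index.htm requests back to the root. The THE_REQUEST condition keeps the rule from looping when Apache serves index.htm internally as the DirectoryIndex for "/":

```apache
RewriteEngine On
# Only match when the visitor literally requested /index.htm,
# not when Apache serves it internally for "/"
RewriteCond %{THE_REQUEST} ^[A-Z]+\ /index\.htm
RewriteRule ^index\.htm$ http://www.mysite.com/ [R=301,L]
```

After this, all internal links should also point to the root URL so crawlers never see the index.htm version.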
Internal search : rel=canonical vs noindex vs robots.txt
Hi everyone, I have a website with a lot of internal search results pages indexed. I'm not asking if they should be indexed or not; I know they should not, according to Google's guidelines. And they make a bunch of duplicated pages, so I want to solve this problem. The thing is, if I noindex them, the site is going to lose a non-negligible chunk of traffic: nearly 13% according to Google Analytics! I thought of blocking them in robots.txt. This solution would not keep them out of the index, but the pages appearing in Google SERPs would then look empty (no title, no description), so their CTR would plummet and I would lose a bit of traffic too. The last idea I had was to use a rel=canonical tag pointing to the original search page (which is empty, without results), but it would probably have the same effect as noindexing them, wouldn't it? (I've never tried it, so I'm not sure.) Of course I did some research on the subject, but each of my findings recommended only one of the 3 methods! One even recommended noindex plus a robots.txt block, which is self-defeating because the block would keep Google from ever seeing the noindex. Is there somebody who can tell me which option is the best to keep this traffic? Thanks a million
Technical SEO | | JohannCR0 -
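For reference, the noindex option described above is a meta tag in the head of each search results page. A noindex,follow variant (a sketch, not from the thread) lets crawlers keep following the result links while dropping the page itself from the index; note it only works if the page is not blocked in robots.txt, since Google must be able to fetch the page to see the tag:

```html
<!-- On each internal search results page -->
<meta name="robots" content="noindex, follow">
```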
How do I fix duplicate content with the home page?
This is probably SEO 101, but I'm unsure what to do here... Last week my weekly crawl diagnostics were off the chart because http:// was not resolving to http://www. I fixed that, but now it's saying I have duplicate content on: http://www.......com http://www.......com/index.php How do I fix this? Thanks in advance!
Technical SEO | | jgower0