URL Duplicate Content Issues (Website Transition)
-
Hey guys,
I just transitioned my website and I have a question. I have built up all the link juice around my old URL style. To give you some clarity:
My old CMS rendered links like this: www.example.com/sweatbands
My new CMS renders links like this: www.example.com/sweatbands/
My new CMS's auto-sitemap also generates them with the slash on the end. Throughout the website, the CMS links to them with the slash at the end, but I link to them without the slash (because it's what I am used to). I have the canonical set to the version without the slash.
Should I just 301 to the version with the slash before Google crawls again? I'm worried that I'll lose all the trust and ranking I built up for the version without the slash. I rank very high for certain keywords, and some pages house a large portion of our traffic. What a mess! Help!
-
Hi Paul, did you find a good way to automatically do the trailing slash redirect?
-
Paul--you may be able to automate this using Apache's rewrite module (mod_rewrite) or just a few lines of PHP. Check out the SEOmoz "how to" guide for 301 redirects, and scroll down to the "SEO Best Practices" section.
The article advises that the fastest way to make the switch is to use the PHP header() function. If you aren't comfortable editing PHP directly (e.g., you're on WordPress or Joomla), then look at the instructions for editing the .htaccess file.
It's a little dense, but hopefully this can save you hours of manually typing the new URLs into a text doc.
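For reference, here's a minimal sketch of the kind of .htaccess rule that guide describes -- this assumes Apache with mod_rewrite enabled, and the exact pattern may need adjusting for your CMS's URL structure, so test it on a staging copy first:

```apache
# Hypothetical .htaccess sketch: 301-redirect any URL that doesn't end
# in a trailing slash to the slash version, sitewide.
RewriteEngine On

# Skip requests for real files on disk (images, CSS, JS, etc.)
RewriteCond %{REQUEST_FILENAME} !-f

# Match any path whose last character is not "/" and append the slash,
# issuing a permanent (301) redirect so link juice is passed along.
RewriteRule ^(.*[^/])$ /$1/ [R=301,L]
```

The nice part is this is one rule for the whole site, so there's no need to type out a redirect line per URL.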
-
Thank you - Apache.
-
Paul--there are some automated options to avoid rewriting the lines by hand. What kind of server are you running? Windows? Linux? Let me know and I'll try to throw a script your way.
-
Josh, thank you. I was going in that direction but just wasn't sure. I'm gonna have to add a lot of slashes here in a minute. Anyone else have any input?
-
Paul,
I understand, your situation does seem messy! Good news though--this should be a relatively simple fix.
A 301 redirect by definition preserves all your link juice. IMO, you should 301 all of the "no slash" pages to the ones with slashes. This will keep your site consistent as you continue to produce content under the new CMS.
As for the canonical tag: technically it won't make a difference once the redirect is in place, but as a best practice, I would change the canonical URL to the version with the slash. This avoids pointing to a page that will ultimately redirect. Since the new "slash" page is going to inherit all of the "no slash" link juice (via the 301), it is appropriate to label it as canonical.
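For example, using the URL from your post (hypothetical markup -- swap in your real domain and protocol), the tag in each page's head would point at the trailing-slash version:

```html
<!-- Canonical now matches the URL that the sitewide 301s resolve to -->
<link rel="canonical" href="http://www.example.com/sweatbands/" />
```

That way the canonical, the sitemap, and the redirect target all agree on one version of each URL.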
Even if you were to see a slight fluctuation in your ranking, don't be alarmed--nothing will have changed in the eyes of the search engine.
In short: 301 to your heart's content and keep producing good content.