Migrating website to new CMS and to https://
-
Hi,
We are migrating an old website to a new one built in WordPress soon.
We also added an SSL certificate to switch to https://.
Most of the URLs stay the same. Can we just migrate from http to https at the server level, and for the URLs that do change just set a 301 redirect? Or are there other things we should take into account?
-
Thanks guys, some helpful tips!
-
Hi Mat_C,
For the HTTPS update, you'll want to ensure that all HTTP URL requests are redirected to the new HTTPS versions. Usually this can be done with a global redirect (often it's easiest to do this via your .htaccess file).
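For reference, a global redirect of that sort in .htaccess (a sketch assuming Apache with mod_rewrite enabled; your host's setup may differ) is only a few lines:

```apache
# Send every HTTP request to the same URL on HTTPS with a permanent (301) redirect
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [L,R=301]
```

It's worth re-crawling a handful of old URLs after enabling it, since a misplaced rule can create redirect chains or loops.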
However, because you're migrating from what sounds like an HTML site or non-WordPress CMS, your page paths, file extensions, etc. will also be changing, so you'll want to map redirects from the old URLs to their new WordPress counterparts more granularly. This should only be necessary for URLs that have external links (links on other sites) referring to them, or URLs that users have bookmarked or will find in advertisements and other referral sources.
We usually look for URLs that have external links using a tool like Moz's Link Explorer and URLs that have Direct or Referral source traffic via Google Analytics / similar.
When you know which old URLs need to be redirected to new URLs, there are WordPress plugins (like Redirection) that can make this easier than adding an exhaustive list to your .htaccess file or similar.
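If you do map a few redirects in .htaccess instead of a plugin, each changed URL gets its own rule; the paths below are hypothetical examples, not URLs from the question:

```apache
# One 301 per changed URL: old static path on the left, new WordPress permalink on the right
Redirect 301 /about-us.html /about/
Redirect 301 /products/widgets.php /shop/widgets/
```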
Best of Luck,
Mike
-
Hi Mat_C,
It's great that you are migrating and switching to https.
Regarding the switch at the server level, you can certainly do it that way. Just remember that you need to address every URL: each one should be redirected to the new site, canonicalized, or 404'd.
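As a reminder of what "canonicalized" looks like in practice, each retained page's head would reference its preferred HTTPS URL (example.com here is a placeholder, not a URL from the question):

```html
<link rel="canonical" href="https://www.example.com/services/" />
```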
There are a lot of issues to consider; take a look at these resources about migrations:
- The Website Migration Guide: SEO Strategy, Process, & Checklist (Moz Blog)
- The Ultimate SEO Guide for Successful Web Migrations at #DigitalOlympus (Aleyda Solis)
- Migration Best Practices (SMX London 2018), backed up by John Mueller (JohnMu) in a tweet
Hope it helps.
Best of luck.
GR
Related Questions
-
Website indexing issues
My website is being indexed with both https with www and https without www, as well as with no www at all. For example: https://www.example.com, https://example.com, and example.com. Three different versions are being indexed. How would I begin resolving this? Hosting?
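One common fix (a sketch assuming Apache and .htaccess; example.com stands in for the real domain, and https + www is an arbitrary choice of canonical version) is to 301 every variant to a single host:

```apache
# Collapse http/https and www/non-www into one canonical version: https://www.example.com
RewriteEngine On
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [L,R=301]
```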
Technical SEO | DigitalRipples
-
Home Page Being Indexed / Referral URLs
I have a few questions related to home page URLs being indexed, canonicalization, and GA reporting...
1. I can view the home page by typing in domain.com, domain.com/, and domain.com/index.htm. There are no redirects, and it's canonicalized to point to domain.com/index.htm. How important is it to have redirects? I don't want unnecessary redirects or canonical tags, but I noticed the trailing slash can sometimes be typed in manually on other pages, sometimes not.
2. When I do a site search (site:domain.com), sometimes the HP shows up as "domain.com/", never "domain.com/index.htm" or "domain.com", and sometimes the HP doesn't show up, period. This seems to change several times a day, sometimes within 15 minutes. I have no idea what is causing it, and I don't know if it has anything to do with #1. In a perfect world, I would ask for the /index.htm to be dropped and redirected to .com/, and the canonical to point to .com/.
3. I've noticed in GA I see /, /index.htm, and a weird Google referral URL (/index.htm?referrer=https://www.google.com/) all showing up as top pages. I think the / and /index.htm appear because I haven't set up a default URL in GA, but I'm not sure what would cause the referrer. I tracked back to when the referrer URL started showing up in the top pages, and it was right around the time they moved over to https://, so I'm not sure what the best option is to remove it. I know this is a lot. I appreciate any insight anyone can provide.
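For point 1, a typical approach (a hedged sketch for Apache; domain.com is the question's placeholder) is to 301 explicit /index.htm requests to the bare root and canonicalize to /:

```apache
# Redirect explicit /index.htm or /index.html requests to /
# Matching against THE_REQUEST avoids looping on the internal DirectoryIndex rewrite
RewriteEngine On
RewriteCond %{THE_REQUEST} \s/index\.html?[\s?] [NC]
RewriteRule ^index\.html?$ / [L,R=301]
```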
Technical SEO | DigMS
-
Merge 2 websites into one, using a non-existing, new domain.
I need to merge https://www.WebsiteA.com and https://www.WebsiteB.com into a fresh new domain (with no content), https://www.WebsiteC.com. I want to do it the best way to keep the existing SEO juice. Website A is the company's home page and built with WordPress. Website B is the company's product page and built with WordPress. Website C will be the new site containing both Website A and B, utilizing WordPress also. What is the best way to do this? I have researched a lot and keep hitting walls on how to do it. It's a little trickier because it's two different domains going to a brand new domain. Thanks
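A common pattern for this kind of merge (a sketch only; the WebsiteA/B/C names are the question's placeholders) is a catch-all 301 on each old domain, plus granular rules for any paths that change on the new site:

```apache
# Placed in .htaccess on Website A's server (and likewise on Website B's):
# send every request to the corresponding path on Website C
RewriteEngine On
RewriteRule ^(.*)$ https://www.WebsiteC.com/$1 [L,R=301]
```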
Technical SEO | jarydcat1
-
Drupal, http/https, canonicals and Google Search Console
I'm fairly new in an in-house role and am currently rooting around our Drupal website to improve it as a whole. Right now on my radar are our use of http/https, canonicals, and our use of Google Search Console. Initial issues noticed:
- We serve http and https versions of all our pages.
- Our canonical tags just refer back to the URL they sit on (apparently a default Drupal thing, which is not much use).
- We don't actually have https properties added in Search Console/GA.
I've spoken with our IT agency who migrated our old site to the current site, and they have recommended forcing all pages to https and setting canonicals to the https pages, which is fine in theory, but I don't think it's as simple as this, right? An old Moz post I found talked about running into issues with images/CSS/JavaScript referencing http. Is there anything else to consider, especially from an SEO perspective? I'm assuming that the appropriate certificates are in place, as the secure version of the site works perfectly well. And on the last point: am I safe to assume we have just never tracked any traffic for the secure version of the site? 😞 Thanks John
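On the images/CSS/JavaScript point, the usual issue is hard-coded http:// asset URLs causing mixed-content warnings once pages are forced to https; the markup below is hypothetical, not from the site in question:

```html
<!-- Triggers a mixed-content warning when the page itself is served over https -->
<img src="http://www.example.com/sites/default/files/logo.png" alt="Logo">

<!-- Safe: https or root-relative references inherit the page's protocol -->
<img src="/sites/default/files/logo.png" alt="Logo">
```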
Technical SEO | joberts
-
When/where is it better to NOFOLLOW
Hello, I have been away from SEO for a while and boy, how things have changed... I have a question/concern about implementing nofollows...
- Should I nofollow repeated internal links?
- Should I nofollow internal links at all?
- What if a page has too many internal links between top navigation with drop-down menus, left-column and footer links? Or should I only use nofollows for outbound links?
I am pretty confused and would love some clarification... Thank you very much, Koki
Technical SEO | WIDE16
-
Https Cached Site
Hi there, I recently switched my site to a new ecommerce platform which hosts the SSL certificate on their end, so my site no longer has HTTPS status unless a user is going through the checkout. Google has cached the HTTPS version of the site, so it sometimes comes up in search, which leads to a nasty warning that the site may not be what they are looking for. Is there a way to tell Google NOT to look at the https version of the site anymore? Thanks! Bianca
Technical SEO | TheBatesMillStore
-
OK to block /js/ folder using robots.txt?
I know Matt Cutts suggests we allow bots to crawl CSS and JavaScript folders (http://www.youtube.com/watch?v=PNEipHjsEPU). But what if you have lots and lots of JS and you don't want to waste precious crawl resources? Also, as we update and improve the JavaScript on our site, we iterate the version number: ?v=1.1... 1.2... 1.3... etc. And the legacy versions show up in Google Webmaster Tools as 404s. For example:
http://www.discoverafrica.com/js/global_functions.js?v=1.1
http://www.discoverafrica.com/js/jquery.cookie.js?v=1.1
http://www.discoverafrica.com/js/global.js?v=1.2
http://www.discoverafrica.com/js/jquery.validate.min.js?v=1.1
http://www.discoverafrica.com/js/json2.js?v=1.1
Wouldn't it just be easier to prevent Googlebot from crawling the js folder altogether? Isn't that what robots.txt was made for? Just to be clear: we are NOT doing any sneaky redirects or other dodgy JavaScript hacks. We're just trying to power our content and UX elegantly with JavaScript. What do you guys say: obey Matt? Or run the JavaScript gauntlet?
Technical SEO | AndreVanKets
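For the record, blocking the folder would be a two-line robots.txt rule, though note that Google has since explicitly advised against blocking JS/CSS, because Googlebot renders pages and needs those files to evaluate them:

```txt
# robots.txt at the site root; asks all compliant crawlers to skip /js/
User-agent: *
Disallow: /js/
```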