Root domain change - how do we best handle existing backlinks from our own content platforms on youtube, etc?
-
Hi, we have recently changed our brand name after 7 years and have changed our root domain to match (33shake.com since 2012, now 33fuel.com).
The site is the same (no migration to a new one) as there were no other business changes apart from the name/domain.
301 redirects are looking after all former 33shake.com links, which are now being redirected to their new 33fuel.com equivalents (slugs are the same in 99% of cases).
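For reference, a domain-wide 301 of this kind is often handled by a single rewrite rule. A minimal sketch, assuming an Apache server with mod_rewrite (the thread doesn't say what 33shake.com actually runs on, and whether the canonical host uses www isn't stated):

```apache
# Hypothetical .htaccess sketch: send every old-domain URL to its
# new-domain equivalent, preserving the slug, via a permanent (301) redirect
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?33shake\.com$ [NC]
RewriteRule ^(.*)$ https://33fuel.com/$1 [R=301,L]
```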
My question is:
We have a lot of backlinks for our old domain (33shake.com) on our own content via our YouTube channel (100+ videos) and also our podcast (64 episodes in, broadcast on 10 platforms).
For maximum SEO benefit as we continue to restore domain authority, etc., to 33fuel.com, are we best to leave these historical backlinks pointing at the old domain and let the redirects pick them up when people click? Or are we better off swapping all of these old historical backlinks so they point directly to the new domain?
Any advice would be greatly appreciated, this is quite a maze we are now picking our way through!
Warren
-
Thanks so much for the clear and helpful response. Amending the links is easy enough for us to do, we'll get to it.
-
You're better off amending the links if at all possible. 301 redirects are great, but they can break down for many reasons. One reason a 301 can be refused equity flow is if the old content is too 'dissimilar' to the new content (think Boolean string similarity, not 'what humans think'). If the old content and new content are 60% similar, don't expect 100% of the authority to go through. If the old and new content are only 20% similar, barely any SEO authority (if any) will translate across.
This is to combat SEO-authority sculpting through redirects. If webmasters voluntarily, editorially decided to link to one old page, but the new page is barely similar to that old resource - are the old hyperlinks still 'valid' in terms of contributing SEO authority? In many cases, no, they are not (the webmasters or editors may not have chosen to link to the new content, even though they did link to the old content). Past a certain point, content has to re-prove itself.
Amending the hyperlinks circumvents that judgement, though links do also decay over time. In general, I have found link amends to be superior to 301 redirects about 90% of the time.
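Google's actual similarity measure is not public, so treat the percentages above as illustrative. A toy sketch of the general idea (mechanical string similarity rather than human judgement), using Python's standard-library difflib; the page texts are hypothetical:

```python
from difflib import SequenceMatcher

def page_similarity(old_text: str, new_text: str) -> float:
    """Mechanical similarity ratio between two page texts, from 0.0 to 1.0."""
    return SequenceMatcher(None, old_text, new_text).ratio()

# Hypothetical before/after copy for one redirected page
old_page = "33Shake chia energy gel for endurance athletes"
new_page = "33Fuel chia energy gel for endurance athletes"
print(f"{page_similarity(old_page, new_page):.2f}")
```

A rebrand like this one, where only the name changes, scores very high on any measure like this; a redirect to a largely unrelated page scores far lower, which is the scenario the answer warns about.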
Related Questions
-
Do bad links to a sub-domain which redirects to our primary domain pass link juice and hurt rankings?
Sometime in the distant past there existed a blog.domain.com for domain.com. This was before we started work for domain.com. During the process of optimizing domain.com, we decided to 301 blog.domain.com to www.domain.com. Recently, we discovered that blog.domain.com actually has a lot of bad links pointing towards it. By a lot, I mean 5,000+. I am curious to hear people's opinions on the following: 1. Are they passing bad link juice? 2. Does Google consider links to a sub-domain being passed through a 301 to be bad links to our primary domain? 3. What is the best approach to having these links removed?
Technical SEO | Shredward
-
Link building to ROOT domain OR to WWW.?
Hello, Here I come with one more 'sensitive' question, hoping that you SEO gurus can give some input. My title explains pretty much what I'm wondering about, but let me give you some short data. I have set from the .htaccess file that all traffic goes to WWW.mydomain.com. I know that it is 'better' for search engines not to have duplicate destinations, as that can decrease page rank because of 'double content'. For search engines, http://domain.com and http://www.domain.com are totally different domains. Now I'm wondering one thing: if I build several thousand backlinks at various sources (blogs, directories, web sites, etc.), shall I link to the domain ROOT, or shall I include the WWW prefix? When looking at Moz Keyword Analysis for my domains, I can see a block about 'Linking Root Domains' and 'Page Linking Root Domains', but no 'www' variable (sub-domain) there. As I have already set the canonical so everything shows with WWW on my website, what logic shall I use when building backlinks? How will search engines translate the link juice in regards to what I wrote above? Thanks in advance, great forum!
Technical SEO | SEOisSEO
-
Can Page Content & Description Have Same Content?
I'm studying my crawl report and there are several warnings regarding missing meta descriptions. My website is built in WordPress and part of the site is a blog. Several of these missing-description warnings relate to blog posts, and I was wondering if I can copy the first few lines of content of each of the posts to put in the meta description, or would that be considered duplicate content? Also, there are a few warnings that relate to blog index pages, e.g. http://www.iainmoran.com/2013/02/ - I don't know if I can even add a description to these, as I think they are dynamically created? While on the subject of duplicate content: if I had a sidebar with the same information on several of the pages, with the content coming from a WP widget, would this still be considered duplicate content, and would Google penalise me for it? I would really appreciate some thoughts on this, please. Thanks, Iain.
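If you do take the first few lines of each post as the description, the trimming logic itself is simple. A language-agnostic sketch in Python (the function name and the 155-character limit are illustrative, not from any WordPress API; most SEO plugins can do this for you):

```python
def make_meta_description(post_text: str, limit: int = 155) -> str:
    """Trim a post's opening copy to a snippet-friendly meta description."""
    text = " ".join(post_text.split())  # collapse whitespace and newlines
    if len(text) <= limit:
        return text
    # Cut at the last full word under the limit, then mark the truncation
    return text[:limit].rsplit(" ", 1)[0] + "..."
```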
Technical SEO | iainmoran
-
Domain Authority - why does it change?
Hey seomoz friends!
I have a question, and if you have some links to read about it, bring it on!! What variables change that measurement?
Technical SEO | petrospan
-
What's the best way to eliminate duplicate page content caused by blog archives?
I (obviously) can't delete the archived pages, regardless of how much traffic they do or don't receive. Would you recommend a meta robots tag or a robots.txt file? I'm not sure I'll have access to the root directory, so I could be stuck with utilizing a meta robots tag, correct? Any other suggestions to alleviate this pesky duplicate page content issue?
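For reference, the meta robots option is a single tag in each archive page's head; "noindex, follow" keeps the page out of the index while still letting crawlers follow its links, and it doesn't require root-directory access:

```html
<!-- Placed in the <head> of each blog archive page -->
<meta name="robots" content="noindex, follow">
```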
Technical SEO | ICM
-
Changing preferred domain
My company has an international website, and because of a technical issue, visitors in one of our main countries cannot visit the "www" version of our site. Currently, the www version is our preferred domain, and the non-www version redirects to it. To solve this problem, I was thinking of proposing the following and would greatly appreciate any feedback! (Note: If you answered my www vs. non-www question, thanks - this is a follow-up.) 1. Set the non-www site as the preferred version 2. Redirect from www to non-www 3. Contact our current links and ask them to change to the version without "www" 4. Change canonical URLs to the version without "www"
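A minimal sketch of step 2, assuming an Apache server with mod_rewrite (the thread doesn't specify the actual server, and the domain is a placeholder):

```apache
# Hypothetical .htaccess rule: 301 all www requests to the bare domain
RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.example\.com$ [NC]
RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]
```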
Technical SEO | theLotter
-
Updating RSS feed times without changing content
My question is what the title reads: if I have an RSS feed in an XML file and from time to time I update the pubDate and time, will this have a positive effect on my website, in terms of RSS aggregators coming to my site, thinking it was recently updated, and creating links to these pages? Or will they be able to determine that there is nothing new by comparing it to the old page they may have stored, thus doing nothing, or maybe even hurting the website?
Technical SEO | mickey11
-
What's the best way to deal with an entire existing site moving from http to https?
I have a client that just switched their entire site from the standard unsecure (http) to secure (https) because of over-zealous compliance issues for protecting personal information in the health care realm. They currently have the server set up to 302 redirect from the http version of a URL to the https version. My first inclination was to have them simply update that to a 301 and be done with it, but I'd prefer not to have to 301 every URL on the site. I know that putting a rel="canonical" tag on every page that refers to the https version of the URL is a best practice (http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=139394), but should I leave the 302 redirects or update them to 301s? Something seems off to me about the search engines visiting an http page, getting 301 redirected to an https page, and then being told by the canonical tag that it's actually the URL they were just 301 redirected from.
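If the client does update the 302 to a 301, a minimal sketch of a site-wide rule, assuming Apache with mod_rewrite (the actual server isn't named in the question):

```apache
# Hypothetical .htaccess rule: permanently redirect all http requests to https
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
```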
Technical SEO | JasonCooper