Does link juice pass along the URL or the folders? 10-year-old PR 6 site
-
We have a website that is ~10 years old with a PR 6. It has a bunch of legitimate links from .edu and .gov sites. Until now the owner has never blogged or added much content to the site. We have suggested that to grow his traffic organically he should add a WordPress blog and get aggressive with his content.
The IT guy is concerned about putting a WordPress blog on the same server as the main site because of security issues with WP. They have a bunch of credit card info on file.
So, would it be better to just put the blog on a subdomain like blog.mysite.com OR host the blog on another server but have the URL structure be mysite.com/blog?
I want to pass as much juice as possible.
Any ideas?
-
This is very helpful information! I believe this is what the admin had proposed. I just wanted to double check with you guys.
I will have to check into the cc info. I am not sure exactly what they have.
Thanks!
-
Hmmm... yeah, I am not sure. I will check into that.
-
The reverse proxy capabilities of both Apache and IIS are designed to do exactly what you're trying to do, Jason. A reverse proxy allows you to host the WordPress installation on any server, then proxy it so that to users it appears to be served from yourdomain.com/blog.
You definitely want the new blog to sit at yoursite.com/blog if you want it to help the ranking value of the primary site.
Reverse proxies are not trivial to set up, but they're not that difficult for an experienced system administrator - especially in this case, as you are building the WordPress blog from scratch (far fewer redirection hassles).
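For illustration only, here's a minimal sketch of what the Apache side might look like, assuming mod_proxy and mod_proxy_http are enabled and the WordPress install lives on a separate (hypothetical) backend host, wp-backend.example.com:
ProxyPreserveHost On
ProxyPass /blog/ http://wp-backend.example.com/blog/
ProxyPassReverse /blog/ http://wp-backend.example.com/blog/
ProxyPreserveHost passes the original Host header through to the backend, and installing WordPress under /blog/ there (with WP_HOME and WP_SITEURL set to http://www.yoursite.com/blog in wp-config.php) means the permalinks it generates already point at the public /blog path. IIS can do the equivalent with the ARR and URL Rewrite modules.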
As EGOL notes though - if you have actual CC data stored, you'd better make sure it meets compliance whether you do the reverse proxy or not. If you just mean you have PII (Personally Identifiable Information) like name, address, etc. on that server, then a reverse proxy can help keep potential WordPress security issues from compromising that.
Here's a Moz blog post/infographic on reverse proxies as a primer.
Hope that helps?
Paul
-
Why do they have CC info on file? Are they PCI compliant?
I would get rid of the CC data or put it in the hands of a very secure service provider.
I would do that for security and so that I could place the blog in a folder on the primary domain.
-
If you can put the blog in a subdirectory such as www.mysite.com/blog, that would be ideal because the link juice is preserved on your site. If you put the blog on a subdomain like blog.mysite.com, the search engines consider them to be two separate sites, and the link juice is split between the two.