Subdomain blog vs. subfolder blog in 2013
-
So I've read the posts here:
http://moz.com/community/q/subdomain-blog-vs-subfolder-blog-in-2013
and many others, the Matt Cutts video, etc.
Does anyone have direct experience that it's still best practice to use the subfolder? (Hopefully a Moz employee can chime in.)
I have a client looking to use HubSpot. They are citing the Matt Cutts video. I'm in charge of SEO/marketing and am at odds with them now. I'd like to present the client with more info than "in my experience in the past I've seen subdirectories work."
Any help? Articles? Etc.?
-
I'm associated with a site that ranked fairly well. Earlier in the summer, the blog was moved from a subfolder to a subdomain for various reasons. While the reasons seemed valid at the time, the site's traffic plummeted about 1-2 weeks later. We're still analyzing, since several other changes were made a few weeks prior; however, the arrows are pointing to the subfolder-to-subdomain change as the likely cause of this plague. We're now looking into moving it back to see if that resolves the problem.
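If we do end up moving it back, we'll also script a check that every old subdomain URL 301s to its subfolder equivalent. A minimal Python sketch (the example.com URLs are placeholders for the real hostnames):

```python
import requests

# Hypothetical old blog URLs; substitute the real subdomain paths.
OLD_URLS = [
    "http://blog.example.com/",
    "http://blog.example.com/some-post/",
]

for url in OLD_URLS:
    # Don't auto-follow redirects; we want to inspect the first hop itself.
    resp = requests.get(url, allow_redirects=False, timeout=10)
    location = resp.headers.get("Location", "(none)")
    verdict = "OK" if resp.status_code == 301 else "CHECK THIS"
    print(f"{url} -> {resp.status_code} {location} [{verdict}]")
```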
-
This does not influence my opinion about anything.
-
Google does not calculate DA; Domain Authority is a Moz metric, not something Google uses.
-
I have first-hand experience that merging a subdomain into a folder on a domain can have a kickass effect on your rankings.
-
I just tested: blog.hubspot.com and hubspot.com both have the same DA in OSE.
I also tested support.hostgator.com and hostgator.com; those have the same DA.
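If anyone wants to reproduce this outside the OSE interface, here's a minimal Python sketch against what I believe is Moz's Links API url_metrics endpoint. The endpoint URL, auth scheme, and response field names are assumptions from memory, so check the current API docs before relying on it:

```python
import requests

# Assumed Moz Links API v2 endpoint; credentials are placeholders.
ENDPOINT = "https://lsapi.seomoz.com/v2/url_metrics"
AUTH = ("YOUR_ACCESS_ID", "YOUR_SECRET_KEY")

targets = ["hubspot.com", "blog.hubspot.com"]
resp = requests.post(ENDPOINT, auth=AUTH, json={"targets": targets}, timeout=10)
resp.raise_for_status()

# Compare the DA reported for the root domain vs. its subdomain.
for target, result in zip(targets, resp.json().get("results", [])):
    print(target, result.get("domain_authority"))
```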
-
If you got Jesse and PhD sayin' something, best go with it.
-
Well yes. I mean it's quite simple: linking to a subdomain does not pass authority to the root domain. It's easy to test on any site that has a subdomain; plug it into OSE and you have yourself two different DAs for that very reason.
It's something I don't see ever changing. There's a reason subdomains are treated separately in terms of incoming links: they are their own entity, and I believe this will always be the case. Can't think of why it wouldn't.
-
Thanks guys. I know everyone in our industry is pro-subdirectories. I guess what I am looking for is irrefutable case studies/facts. Have you guys tested this post-2012? Is there any evidence from 2013 that this is still the case?
-
I second that. You use the blog to build the authority of the main domain.
-
Using a subdirectory will cause all of the potential link juice to flow to your root domain. If you go with a subdomain, the potential links gained from awesome blog content won't do your actual domain any good as far as ranking organically for your targeted keywords.
That's the short version. Subdirectories all the way (assuming this is what you're aiming at, of course).
-
Related Questions
-
Can I run a successful SEO campaign for a subdomain?
My company has been around for several years now and hasn't really paid much mind to SEO or search engine rankings, so now I'm an in-house marketer with moderate SEO knowledge. We're setting up an article site with our help pages and blog under a subdomain so our writers can easily post articles without having to go through developers every time, as our root domain was set up with a custom in-house CMS. Is it possible for me to run a successful SEO campaign for our article site subdomain? I get that the root domain wouldn't benefit from any SEO authority the new site obtains, but my hands are tied.
-
ExampleSite.com vs ExampleSite.com.br
What would you say to a client who is concerned he'd have to run around buying his .com.??? in a lot of other countries? Thanks!
-
Canonical tag + HREFLANG vs NOINDEX: Redundant?
Hi, We launched our new site back in Sept 2013 and, to control indexation, traffic, etc., we only allowed the search engines to index single-dimension pages such as just category, brand or collection, but never both, like category + brand, brand + collection or collection + category. We are now opening indexing to double-faceted pages like category + brand, and the new tag structure would be:
For any other facet we're including a "noindex, follow" meta tag.
1. My question is, if we're including a "noindex, follow" tag on select pages, do we need to include a canonical or hreflang tag after all? Should we include it either way for when we want to remove the "noindex"?
2. Is the x-default redundant?
Thanks for any input. Cheers WMCA
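To make the intended rules concrete, here's a rough sketch of the indexation logic we're moving to (Python, with the facet names simplified; the exact facet list is an assumption based on the description above):

```python
# Facet combinations we want indexable: single facets, plus category + brand.
INDEXABLE = {
    frozenset(["category"]),
    frozenset(["brand"]),
    frozenset(["collection"]),
    frozenset(["category", "brand"]),
}

def robots_meta(facets):
    """Return the meta robots value for a page filtered by the given facets."""
    if frozenset(facets) in INDEXABLE:
        return "index, follow"
    return "noindex, follow"

print(robots_meta(["category", "brand"]))    # index, follow
print(robots_meta(["brand", "collection"]))  # noindex, follow
```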
-
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi Guys, We have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similar to autotrader.com or cargurus.com, and there are two primary components:
1. Vehicle Listings Pages: this is the page where the user can use various filters to narrow the vehicle listings to find the vehicle they want.
2. Vehicle Details Pages: this is the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo
The Vehicle Listings pages (#1) we do want indexed and to rank. These pages have additional content besides the vehicle listings themselves, and those results are randomized or sliced/diced in different and unique ways. They're also updated twice per day.
We do not want to index #2, the Vehicle Details pages, as these pages appear and disappear all of the time based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results: Example Google query.
We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
Robots.txt advantages:
- Super easy to implement
- Conserves crawl budget for large sites
- Ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.
Robots.txt disadvantages:
- Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would leave 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?)
Noindex advantages:
- Does prevent vehicle details pages from being indexed
- Allows ALL pages to be crawled (advantage?)
Noindex disadvantages:
- Difficult to implement: the vehicle details pages are served via Ajax, so there is no head tag in which to place a meta robots tag. The solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex header based on querystring variables, similar to this stackoverflow solution (a rough sketch of that header idea is at the end of this post). This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it).
- Forces (or rather allows) Googlebot to crawl hundreds of thousands of noindexed pages. I say "forces" because of the crawl budget required. The crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed.
- Cannot be used in conjunction with robots.txt. After all, the crawler never reads the noindex meta tag if it is blocked by robots.txt.
Hash (#) URL advantages:
- By using hash URLs for links from Vehicle Listings pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with JavaScript, the crawler won't be able to follow/crawl these links.
- Best of both worlds: crawl budget isn't overtaxed by thousands of noindexed pages, and the internal links that were getting robots.txt-disallowed pages indexed are gone.
- Accomplishes the same thing as nofollowing these links, but without looking like PageRank sculpting (?)
- Does not require complex Apache stuff
Hash (#) URL disadvantages:
- Is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?
Initially, we implemented robots.txt, the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate vehicle details pages, and we wanted it to be like these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that.
If we implement noindex on these pages (and doing so is a difficult task in itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallow, in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of vehicle details pages, all of which are noindexed, and it could easily get stuck/lost. It seems like a waste of resources, and in some shadowy way bad for SEO.
My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and conserves crawl budget while keeping vehicle details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links like these ().
Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
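To illustrate the X-Robots-Tag variant we've been weighing, here's a rough sketch of the idea in Python/Flask terms. Our plugin isn't actually built on Flask, and vehicle_id is a stand-in for the real querystring variable, so treat this as a sketch of the concept rather than the implementation:

```python
from flask import Flask, request

app = Flask(__name__)

@app.after_request
def noindex_vehicle_details(response):
    # Hypothetical querystring variable; adjust the condition to match the
    # plugin's real URL structure.
    if request.args.get("vehicle_id"):
        # A header-level noindex works even though these Ajax responses have
        # no <head> in which to place a meta robots tag.
        response.headers["X-Robots-Tag"] = "noindex, follow"
    return response
```
-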
Archiving a festival website - subdomain or directory?
Hi guys I look after a festival website whose program changes year in and year out. There are a handful of mainstay events in the festival which remain each year, but there are a bunch of other events which change each year around the mainstay programming. This often results in us redoing the website each year (a frustrating experience indeed!). We don't archive our past festivals online, but I'd like to start doing so for a number of reasons:
1. These past festivals have historical value: they happened, and they contribute to telling the story of the festival over the years. They can also be used as useful windows into the upcoming festival.
2. The old events (while no longer running) often get many social shares, high-quality links and in some instances still drive traffic. We try our best to 301 redirect these high-value pages to the new festival website, but it's not always possible to find a similar alternative (so these redirects often go to the homepage).
Anyway, I've noticed some festivals archive their content into a subdirectory, i.e. www.event.com/2012. However, I'm thinking it would actually be easier for my team to archive via a subdomain like 2012.event.com and always use the www.event.com URL for the current year's event. I'm thinking universally redirecting the content would be easier, as would cloning the site/database etc. (a sketch of the kind of redirect map I mean is below).
My question is: is one approach (i.e. directory vs. subdomain) better than the other? Do I need to be mindful of using a subdomain for archival purposes? Hope this all makes sense. Many thanks!
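To make the redirect side concrete, here's a rough sketch (Python/Flask, with made-up paths) of the kind of 301 map I'm imagining for retiring a year's programming into a subdirectory archive:

```python
from flask import Flask, redirect

app = Flask(__name__)

# Hypothetical mapping from retired festival URLs to their archived homes
# under a year-based subdirectory.
ARCHIVE_REDIRECTS = {
    "events/big-opening-night": "/2012/events/big-opening-night",
    "program": "/2012/program",
}

@app.route("/<path:old_path>")
def archive_redirect(old_path):
    target = ARCHIVE_REDIRECTS.get(old_path)
    if target:
        return redirect(target, code=301)  # permanent: passes link equity on
    # Unknown retired URLs fall back to the homepage, as our redirects do today.
    return redirect("/", code=302)
```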
-
Frame forwarding my blog
Hello again. Last blog question for a while, I promise! 🙂 The annoying folk behind my website say that the only way for my blog to be at http://www.celynnenphography.co.uk/blog would be to frame forward it, because of how they are hosting and managing it, etc. Is this an acceptable and useful thing regarding SEO? (I want my website to benefit from my blog's content.) Thanks a lot guys! Ioan
-
Disabled/Accessibility vs SEO?
Can anyone point me to resources that help website owners balance these two issues? Or how to SEO a site meant for disabled users? Or how to make an SEO'd site more accessible? Thanks!
-
Google.ca vs Google.com Ranking
I have a site I would like to rank high for particular keywords in the Google.ca searches and don't particularly care about the Google.com searches (it's a Canadian service). I have logged into Google Webmaster Tools and targeted Canada. Currently my site is ranking on the third page for my desired keywords on Google.com, but is on the 20th page for Google.ca. Previously this change happened quite quickly (within 4 weeks), but it doesn't seem to be taking here (12 weeks out and counting). My optimization seems to be fine, since I'm ranking well on Google.com; I'm not sure why it's not translating to Google.ca. Any help or thoughts would be appreciated.