What is the best method to block a sub-domain, e.g. staging.domain.com/ from getting indexed?
-
Now that Google considers subdomains as part of the TLD, I'm a little leery of testing robots.txt on the subdomain with something like this at staging.domain.com:

User-agent: *
Disallow: /

for fear it might get www.domain.com blocked as well. Has anyone had any success using robots.txt to block sub-domains? I know I could add a meta robots tag to the staging.domain.com pages, but that would require a lot more work.
-
Just make sure that when/if you copy the staging site over to the live domain, you don't also copy over the robots.txt, .htaccess, or whatever other means you used to block the staging site from being indexed; otherwise your shiny new live site will be blocked too.
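One way to avoid that mistake entirely is to serve the blocking rules only when the request arrives on the staging hostname. A minimal sketch, assuming Apache with mod_rewrite enabled and a hypothetical robots-staging.txt file kept in the web root:

# Serve a restrictive robots file only on the staging host;
# the live www host keeps its normal robots.txt untouched.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^staging\.domain\.com$ [NC]
RewriteRule ^robots\.txt$ /robots-staging.txt [L]

With this in place the same .htaccess can be deployed to both environments without risking the live site.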
-
I agree. The fact that your subdomain is named "staging" didn't register with me at all until Matt brought it up. I was offering a generic response to the subdomain question, whereas I believe Matt focused on how to handle a staging site. Interesting viewpoint.
-
Matt/Ryan-
Great discussion, thanks for the input. staging.domain.com is just one of the subdomains we don't want indexed. Some of them still need to be accessible to the public; others, like staging, could be restricted to specific IPs.
I realize after your discussion that I probably should have used a different example of a sub-domain. On the other hand, it might not have sparked the discussion otherwise, so maybe it was a good example.
-
.htaccess files can be placed at any directory level of a site, so you can apply the restriction to just the subdomain, or even to just a single directory of a domain.
-
Staging URLs are typically only used for testing, so rather than using a deny rule I would recommend a specific ALLOW for only the IP addresses that should have access.
I would imagine you don't want it indexed because you don't want the rest of the world knowing about it.
You can also use .htaccess to require a username/password. It is simple, and you can hand the credentials to clients if that is a concern/need.
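A minimal .htaccess sketch combining both ideas, assuming Apache 2.4+; the IP address and password-file path are placeholders you would replace:

# Grant access either from an allowed IP or with a valid login;
# everyone else (including crawlers) is refused.
AuthType Basic
AuthName "Staging site"
AuthUserFile /path/to/.htpasswd
<RequireAny>
    Require ip 203.0.113.10
    Require valid-user
</RequireAny>

With RequireAny, requests from the allowed IP skip the password prompt, while clients elsewhere can still get in with credentials.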
-
Correct.
-
Toren, I would not recommend that solution. There is nothing to prevent Googlebot from crawling your site via almost any IP. If you found 100 IPs used by the crawler and blocked them all, there is nothing to stop the crawler from using IP #101 next month. Once the subdomain's content is located and indexed, it will be a headache fixing the issue.
The best solution is always going to be a noindex meta tag on the pages you do not wish to be indexed. If that method is too much work or otherwise undesirable, you can use the robots.txt solution. There is no circumstance I can imagine where you would modify your .htaccess file to block Googlebot.
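For reference, the per-page tag looks like the first line below. If editing every template is the "too much work" part, a single X-Robots-Tag response header set in the subdomain's .htaccess covers every page; that alternative is an assumption on my part and requires Apache with mod_headers enabled:

<!-- per-page version, placed in the <head> of each staging page -->
<meta name="robots" content="noindex, nofollow">

# site-wide alternative in the subdomain's .htaccess (requires mod_headers)
Header set X-Robots-Tag "noindex, nofollow"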
-
Hi Matt.
Perhaps I misunderstood the question, but I believe Toren only wishes to prevent the subdomain from being indexed. If you restrict subdomain access by IP, it would prevent visitors from accessing the content, which I don't believe is the goal.
-
Interesting, I hadn't thought of using .htaccess to block Googlebot. Thanks for the suggestion.
-
Thanks Ryan. So you don't see any risk of de-indexing the main site if I create a second robots.txt file, e.g. at http://staging.domain.com/robots.txt containing:

User-agent: *
Disallow: /

That was my initial thought, but when Google announced they consider sub-domains part of the TLD I was afraid it might affect the http://www.domain.com versions of the pages. So you're saying the subdomain is basically treated like a folder you block on the primary domain?
-
Use an .htaccess file to allow access only from certain IP addresses or ranges.
Here is an article describing how: http://www.kirupa.com/html5/htaccess_tricks.htm
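For reference, the older Apache 2.2 allow/deny syntax looks roughly like this; the addresses are placeholders, and on Apache 2.4+ you would use Require ip instead, as in the earlier sketch:

# Apache 2.2-style: deny everyone, then allow specific addresses/ranges
Order Deny,Allow
Deny from all
Allow from 203.0.113.10
Allow from 198.51.100.0/24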
-
What is the best method to block a sub-domain, e.g. staging.domain.com/ from getting indexed?
Place a robots.txt file in the root of the subdomain:

User-agent: *
Disallow: /

This method will block the subdomain while leaving your primary domain unaffected.