Can Robots.txt on Root Domain override a Robots.txt on a Sub Domain?
-
We currently have beta sites on sub-domains of our own domain. We have had issues where people forget to change the Robots.txt and these non-relevant beta sites get indexed by search engines (nightmare).
We are going to move all of these beta sites to a new domain, where we will disallow everything in the robots.txt at the root of the domain.
If we put fully configured robots.txt files on these sub-domains (ready to go live and open for crawling by the search engines), is there a way for the robots.txt on the root domain to override the robots.txt files on these sub-domains?
Apologies if this is unclear. I know we can handle this relatively easily by changing the robots.txt on the sub-domain when going live, but due to a few instances where people have forgotten, I want to reduce the chance of human error!
Cheers,
Dave.
-
Hi Dave. A workflow checklist should really help with this as well. There are probably a few other items you'll catch by meeting with the others involved and getting everyone on the same page. Cheers!
-
Dave, I had exactly the same issue a month ago with beta sites being indexed on subdomains, so I looked into this. Unfortunately, no: the robots.txt on your root domain cannot override the robots.txt on a subdomain. Crawlers treat each hostname as a separate site and fetch robots.txt from each host independently, so beta.example.com is governed only by beta.example.com/robots.txt. Disallow rules also only take path patterns on the same host, so you can't write a rule in the root file that references another subdomain's URLs. The most reliable safeguard is to make "disallow all" the default in your beta deployment template, so a site only opens to crawlers when someone deliberately swaps in the live robots.txt, rather than relying on someone remembering to remove a restrictive one.
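To cut down the human-error risk Dave mentions, one option is an automated pre-launch check that fails loudly when a live subdomain's robots.txt doesn't block crawling. A minimal sketch in Python using the standard-library parser (the hostnames are placeholders, and the fetched contents are inlined here; in practice you would fetch each `https://<host>/robots.txt` over HTTP):

```python
from urllib.robotparser import RobotFileParser

# What every beta subdomain is expected to serve until launch.
BETA_RULES = "User-agent: *\nDisallow: /\n"

def is_fully_blocked(robots_txt: str) -> bool:
    """Return True if this robots.txt blocks all crawlers from all paths."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return not parser.can_fetch("*", "/")

# Placeholder data standing in for real HTTP fetches of each robots.txt.
fetched = {
    "beta1.example.com": BETA_RULES,
    "beta2.example.com": "User-agent: *\nAllow: /\n",  # someone forgot to lock this one
}

for host, robots in fetched.items():
    status = "locked" if is_fully_blocked(robots) else "OPEN - fix before indexing!"
    print(host, status)
```

Wired into a deployment script or cron job, a check like this turns "someone forgot" into an alert instead of an indexed beta site.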
Related Questions
-
My site auto-redirects http to https. This is causing redirect chains. What can I do?
I noticed that Moz flags a lot of redirect chain issues on my site. I realized that this is mostly because the site automatically redirects http to https, so whenever a URL changes, the new redirect is flagged as a chain. Example: http://www.example-link auto-redirects to https://www.example-link, which is then redirected to https://www.example-link-changed (when the address actually changes). I don't seem to have any control over changing where the initial http redirect goes. Any advice on fixing this problem?
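If you can edit the server's rewrite rules, the usual fix is to put the specific old-to-new redirect ahead of the blanket http-to-https rule, so requests for a renamed URL reach the final https address in a single hop. A hedged .htaccess sketch using Apache mod_rewrite (the paths and hostname are placeholders, and your hosting platform may manage these rules for you):

```apache
RewriteEngine On

# Renamed URL: send both http and https requests for the old address
# straight to the final https destination in one 301. This rule must
# come before the blanket rule, since [L] stops on the first match.
RewriteRule ^example-link/?$ https://www.example.com/example-link-changed [R=301,L]

# Blanket hop: anything still on http goes to the same path on https.
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
```

If the platform forces its own http-to-https redirect first, the chain can't be fully collapsed from your side; in that case focus on making every internal link and sitemap entry point directly at the final https URL.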
On-Page Optimization | baystatemarketing
-
Domain Authority
I made some changes to my site and many of my keywords improved dramatically. It's odd, however, that the changes caused the Domain Authority to go down by 3 points, from 20 to 17. Is this a short-term thing caused by the recent changes, and is it likely to recover?
On-Page Optimization | KrisIrr
-
Robots.txt file issue on WordPress site.
I'm facing an issue with the robots.txt file on my blog. Two weeks ago I did some development work on the blog and added a few pages to the robots file. Now my complete site seems to be blocked. I have checked and updated the file but am still having the issue. The search results show "A description for this result is not available because of this site's robots.txt – learn more." Any suggestions for overcoming this issue?
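One quick way to see which rule is doing the blocking is to run the file through Python's standard-library robots.txt parser. A minimal sketch (the contents are inlined for illustration; in practice call `parser.set_url("https://your-blog.example.com/robots.txt")` and `parser.read()` to test the live file, where the blog URL is a placeholder):

```python
from urllib.robotparser import RobotFileParser

# Example of an over-broad rule that a development change might leave behind.
rules = "User-agent: *\nDisallow: /\n"

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A blanket "Disallow: /" blocks every URL on the site, which is exactly
# what produces the "A description for this result is not available"
# snippet in Google's results.
blocked = not parser.can_fetch("*", "https://your-blog.example.com/some-post/")
print(blocked)  # True
```

Testing a handful of real post URLs this way narrows the problem down to the specific Disallow line before you edit anything.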
On-Page Optimization | Mustansar
-
SOS - I have made a terrible mistake: how can I fix it?
Two weeks ago I changed the URLs on our website without redirecting the old ones. This has led to a dramatic drop in ranking. What should I do: redirect the old pages to the new ones and keep the new URLs, or change the URLs back to the original ones? Which of these two methods will result in the best ranking? Maia
On-Page Optimization | MaiaHaaland
-
Google ranking is HORRIBLE. Following SEOMoz suggestions and just can't climb.
First of all, the URL is stores.dhsequipment.com. In January, this online store switched from Homestead to Big Commerce. Since the store updated, we decided now was the time to update our product descriptions, URLs, title tags, and meta descriptions. (For the first time, we had the ability to customize our URLs.) Product descriptions: I went through 2,500 products and updated each product description. I added an H1 and H2 to each description and included pertinent information such as part numbers. Each product also received a new page title, a meta description (usually the first line of the product description; I don't know if this is bad or not), and a new URL (which did redirect). Once I completed a section, I would submit a new sitemap to Webmaster Tools. After a month of nothing happening, I started using SEOMoz, which helped me rebuild some of my more important pages, such as the home page and main category pages like http://stores.dhsequipmentparts.com/stihl-ts420-parts/ and http://stores.dhsequipmentparts.com/stihl-ts700-parts-stihl-ts800-parts/. I fetched these pages in Webmaster Tools after completion. However, it's been several weeks since, and I'm still on page 4 or 5 in the SERPs. Just a little history on the store: it has been in operation for more than six years. Previously, we ranked on page one for 75%+ of our products. My belief is that our URLs had history, probably more so than our competitors'. I'm not sure what I should do. Business is super slow and we can't afford to wait much longer.
On-Page Optimization | pearldesign
-
I have more pages in my sitemap being blocked by the robots file than I have being allowed to be crawled. Is Google going to hate me for this?
I'm using some rules to block all pages that start with "copy-of" on my website because people have a bad habit of duplicating new product listings to create our refurbished, surplus, etc. listings for those products. To avoid Google seeing these as duplicate pages, I've blocked them in the robots file, but of course they are still automatically generated in our sitemap. How bad is this?
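One way to keep blocked pages out of an auto-generated sitemap is to filter the URL list against the same robots.txt rules before writing the file. A minimal sketch in Python (the domain, paths, and rule shown are placeholders standing in for the real site):

```python
from urllib.robotparser import RobotFileParser

# The same kind of rule that blocks the duplicated "copy-of" listings.
rules = "User-agent: *\nDisallow: /copy-of\n"

parser = RobotFileParser()
parser.parse(rules.splitlines())

all_urls = [
    "https://example-store.com/widget-123/",
    "https://example-store.com/copy-of-widget-123/",  # auto-duplicated listing
]

# Only emit URLs that crawlers are actually allowed to fetch.
sitemap_urls = [u for u in all_urls if parser.can_fetch("*", u)]
print(sitemap_urls)  # ['https://example-store.com/widget-123/']
```

Running the generated URL list through this filter keeps the sitemap and the robots file telling crawlers the same story.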
On-Page Optimization | absoauto
-
How can I stop Google reading a certain section of text within my H1 tag?
Hey Mozzers, I'm wondering if anybody knows of a way that I can stop Google reading a certain part of the text within my H1 tags? My issue is that I have individual office pages on my site, but many offices are based in the same city, such as London. I want to keep "London" within the H1 tag for user experience, but I do not want it to be picked up by the search engines and cause a canonicalization issue. I've seen some people say to use document.write or use an image. Does anybody know of a correct way of doing this? Many thanks.
On-Page Optimization | Lakeside
-
Does Google respect User-agent rules in robots.txt?
We want to use an inline linking tool (LinkSmart) to cross-link between a few key content types on our online news site. LinkSmart uses a bot to establish the linking. The issue: there are millions of pages on our site that we don't want LinkSmart to spider and process for cross-linking. LinkSmart suggested setting a noindex tag on the pages we don't want them to process, and targeting the rule to their specific user agent. I have concerns. We don't want to inadvertently block search engine access to those millions of pages. I've seen Googlebot ignore nofollow rules set at the page level. Does it ever arbitrarily obey rules that it's been directed to ignore? Can you quantify the level of risk in setting user-agent-specific noindex tags on pages we want search engines to crawl, but that we want LinkSmart to ignore?
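For what it's worth, user-agent groups in robots.txt are scoped per crawler: each bot obeys only the most specific group matching its own name and ignores the rest, so a group aimed at LinkSmart's bot should not affect Googlebot. A sketch of what that could look like (the bot name "LinkSmartBot" is a guess; confirm the real user-agent string with the vendor before relying on it):

```
# Applies only to LinkSmart's crawler (hypothetical user-agent name).
User-agent: LinkSmartBot
Disallow: /archive/

# All other crawlers, including Googlebot, fall back to this group
# and may crawl everything.
User-agent: *
Disallow:
```

Note that robots.txt controls crawling, not indexing, and it only works if the third-party bot actually honors the protocol, which is worth verifying with the vendor rather than assuming.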
On-Page Optimization | lzhao