Dev Site Was Indexed By Google
-
Two of our dev sites (subdomains) were indexed by Google. They have since been made private now that we've found the problem. Should we take a further step to remove the subdomains through robots.txt, or just let it ride out?
From what I understand, to remove the subdomain from Google we would verify the subdomain in GWT, then give the subdomain its own robots.txt and disallow everything.
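For reference, a robots.txt that disallows everything for all crawlers is just two lines (served from the root of the dev subdomain, e.g. dev.example.com/robots.txt):

```
User-agent: *
Disallow: /
```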
Any advice is welcome, I just wanted to discuss this before making a decision.
-
We ran into this in the past, and one thing we think happened is that links to the dev site were sent via email to several Gmail accounts. We suspect this is how Google then indexed the site, as there were no inbound links posted anywhere.
I think the main issue is how it's perceived by the client, and whether they are freaking out about it. In that case, password-protecting the site will keep anyone without credentials from seeing it.
The robots.txt file should flush it out of the index, but yes, it takes a little bit of time.
-
I've had this happen before. In the dev subdomain, I added a robots.txt that excluded everything, verified the subdomain as its own site in GWT, then asked for that site (dev subdomain) to be removed.
I then used a free code-monitoring service that checks a URL for changes once a day. I set it up to check the live site's robots.txt and the robots.txt of each of the dev sites, so I'd know within 24 hours if the developers had tweaked them.
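If you'd rather not rely on a third-party service, the same check can be home-rolled in a few lines. This is a minimal sketch (the URLs are placeholders); a daily cron job could call `check_all()` and alert on anything it returns:

```python
# Minimal robots.txt change monitor: fetch each watched file once a day
# and compare a hash of its contents against the previous run.
import hashlib
import urllib.request

# Placeholder URLs -- replace with your live and dev robots.txt locations.
WATCHED = [
    "https://www.example.com/robots.txt",
    "https://dev.example.com/robots.txt",
]

def fingerprint(content: bytes) -> str:
    """Hash the file contents so changes can be detected between runs."""
    return hashlib.sha256(content).hexdigest()

def changed(stored_hash, content: bytes) -> bool:
    """True if content differs from the previously stored fingerprint.
    A missing stored hash (first run) is not treated as a change."""
    return stored_hash is not None and stored_hash != fingerprint(content)

def check_all(store: dict) -> list:
    """Fetch each watched URL, collect the ones that changed, update store."""
    alerts = []
    for url in WATCHED:
        content = urllib.request.urlopen(url, timeout=10).read()
        if changed(store.get(url), content):
            alerts.append(url)
        store[url] = fingerprint(content)
    return alerts
```

The `store` dict would be persisted between runs (a JSON file is enough) so each day's fetch is compared against the day before.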
-
Hi Tyler,
You definitely don't want to compete with yourself over duplicate content. If the current subdomains have few inbound links, I would simply block them from being further indexed. If there are a couple of pages of high value, it may be worth the time to 301 redirect them to avoid losing any links/juice.
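As an illustrative sketch only (dev.example.com and www.example.com are placeholders), a site-wide 301 from the dev subdomain could be done in the dev site's .htaccess with mod_rewrite:

```apache
# 301 every request on the dev subdomain to the same path on the live site
RewriteEngine On
RewriteCond %{HTTP_HOST} ^dev\.example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```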
Using robots.txt or noindex tags may work, but in my personal experience the easiest and most efficient way to block any indexing is simply to use .htaccess/.htpasswd. This prevents anybody without credentials from even viewing your site, effectively blocking all spiders, bots, and unwanted snoopers.
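A minimal .htaccess for HTTP Basic Auth looks like this (the .htpasswd path is a placeholder; the file itself is created with the `htpasswd -c /path/to/.htpasswd username` command and should live outside the web root):

```apache
# Require a username/password for the entire dev site
AuthType Basic
AuthName "Development Site"
AuthUserFile /path/to/.htpasswd
Require valid-user
```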
-
Hey Tyler,
We would follow the same protocol if in your shoes. Remove any instance of the indexed dev subdomain(s), then create robots.txt files for each subdomain and request removal of any indexed content/links as an extra step. Also, double-check and even resubmit your root domain's XML sitemap so Google can reindex your main content/links as a precautionary measure.
PS - We develop on a separate server and domain for any new work, whether for our site or client sites. Doing this allows us to block Google from everything.
Hope this was helpful! - Patrick