Dev Site Was Indexed By Google
-
Two of our dev sites (subdomains) were indexed by Google. They have since been made private now that we've found the problem. Should we take a further step, such as removing the subdomains through robots.txt, or just let it ride out?
From what I understand, to remove a subdomain from Google we would verify the subdomain in Google Webmaster Tools (GWT), then give the subdomain its own robots.txt and disallow everything.
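For reference, a disallow-everything robots.txt for a dev subdomain is just two lines, placed at the subdomain's root:

```
User-agent: *
Disallow: /
```

Note that this blocks crawling, not necessarily indexing, which is why the removal request in GWT is also part of the process.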
Any advice is welcome, I just wanted to discuss this before making a decision.
-
We ran into this in the past, and one thing we think happened is that links to the dev site were sent via email to several Gmail accounts. We believe this is how Google indexed the site, as there were no inbound links posted anywhere.
I think the main issue is how it's perceived by the client, and whether they are freaking out about it. In that case, password-protecting the site will prevent anyone without credentials from seeing it.
The robots.txt file should flush it out, but yes, it takes a little bit of time.
-
I've had this happen before. In the dev subdomain, I added a robots.txt that excluded everything, verified the subdomain as its own site in GWT, then asked for that site (dev subdomain) to be removed.
I then used a free code-monitoring service that checks a URL for changes once a day. I pointed it at the live site's robots.txt and the robots.txt of each dev site, so I'd know within 24 hours if the developers had tweaked the robots.txt.
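The same monitoring can be done without a third-party service. A minimal sketch in Python: hash each robots.txt body and compare against the last known hash (a daily cron job would do the actual fetching, e.g. with `urllib.request`; the example robots.txt contents below are hypothetical):

```python
import hashlib

def fingerprint(content: str) -> str:
    """Return a stable hash of a robots.txt body."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

def has_changed(stored_hash: str, current_content: str) -> bool:
    """Compare the last known hash against a freshly fetched file."""
    return fingerprint(current_content) != stored_hash

# Baseline recorded when the dev robots.txt was put in place:
baseline = fingerprint("User-agent: *\nDisallow: /\n")

# Later checks against freshly fetched content:
print(has_changed(baseline, "User-agent: *\nDisallow: /\n"))  # unchanged
print(has_changed(baseline, "User-agent: *\nDisallow:\n"))    # developer edit detected
```

Alert (email, Slack, etc.) whenever `has_changed` returns True for any monitored file.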
-
Hi Tyler,
You definitely don't want to compete with yourself over duplicate content. If the current subdomains have little link juice (inbound links) pointing to them, I would simply block the domain from being indexed further. If there are a couple of pages of high value, it may be worth the time to set up 301 redirects so you don't lose any links/juice.
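A per-URL 301 in Apache's .htaccess on the dev subdomain might look like this (the paths and live domain are placeholders; point each high-value dev URL at its equivalent page on the live site):

```apache
# Redirect a high-value dev URL to its live equivalent
Redirect 301 /old-dev-page.html https://www.example.com/old-dev-page.html
```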
Using robots.txt or noindex meta tags may work, but in my personal experience the easiest and most efficient way to block indexing is to use .htaccess / .htpasswd. This will prevent anybody without credentials from even viewing your site, effectively blocking all spiders/bots and unwanted snoopers.
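A minimal Basic Auth setup for Apache, assuming the dev subdomain has its own document root (the .htpasswd path is a placeholder):

```apache
# .htaccess in the dev subdomain's document root
AuthType Basic
AuthName "Development site"
AuthUserFile /home/dev/.htpasswd
Require valid-user
```

Generate the credentials file once with `htpasswd -c /home/dev/.htpasswd username`; every request, human or bot, then gets a 401 until it authenticates.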
-
Hey Tyler,
We would follow the same protocol if in your shoes. Remove any instance of the indexed dev subdomain(s), then create a new robots.txt file for each subdomain and, as an extra step, disavow any indexed content/links. Also, double-check and even resubmit your root domain's XML sitemap so Google can reindex your main content/links as a precautionary measure.
PS - We develop any new work, whether for our site or for client sites, on a separate server and domain. Doing this allows us to block Google from everything.
Hope this was helpful! - Patrick