Staging & Development areas should not be indexable (i.e. noindex/nofollow in meta robots etc.)
-
Hi
I take it that if there's a staging or development area on a subdomain for a site, whose content is therefore usually duplicate, then it should not be indexable (i.e. noindexed & nofollowed in meta robots)? That would prevent duplicate content problems, as well as stop non-project people seeing work in progress or finding it accidentally in search engine listings.
Also, if there's no such info in meta robots, is there any other way it may have been made non-indexable, or at least had the duplicate content problem removed, e.g. by canonicalising the page to the equivalent page on the live site?
In the case in question I am finding it listed in the SERPs when I search for the staging/dev area URL, so I presume this needs urgent attention?
Cheers
Dan
-
Use robots.txt vs. the meta tags: for this removal process, robots.txt is preferred.
-
I'm about to issue these instructions and would appreciate it if you could quickly confirm they cover your advice correctly and that nothing is missing:
1) Set up a completely separate GWT account, unrelated to the main site, so that there is a new GWT account specific to the staging subdomain.
2) Add a robots.txt on the staging subdomain that disallows all pages for all crawlers, OR use the noindex meta tag on all pages. It's obviously very important that when you update the main site it does NOT include or push out these files too (since that would result in the main site or its pages being de-indexed).
3) Request removal of all pages in GWT. Leave the form blank for the page to be removed, since this will remove the entire site.
4) After about 1 month (or once you see that the pages are all out of the SERPs), and Google has spidered and seen the robots.txt, put a password on the entire staging site.
Note: for staging areas of brand new sites that don't yet exist, or that exist but are new and not yet showing up in the index, simply add a password for human access to prevent the above process being required in the future.
-
Thanks for clarifying that, CleverPHD, & thanks again for all your help and great advice
Have a great weekend!!
All Best
Dan
-
That is a completely valid question. This is why you set up the separate GWT account for dev.domain.ext vs. www.domain.ext: when you submit the removal request, it will only apply within the dev.domain.ext account.
The only thing you want to watch is that if you set up a robots.txt in your dev environment, you make sure it does not get pushed out to your production server. That is the only gotcha as I see it.
-
Thanks!
As per my last question, there's no risk of accidentally taking out the main site as part of this process?
Cheers
Dan
-
Thanks so much for that great advice
Just a bit worried about accidentally getting the main site removed. I take it that so long as it's a brand new GWT account for that specific subdomain, this can't happen?
Cheers
Dan
-
Here is Google's documentation on how to use GWT to remove a page/directory/site, and how that interacts with robots.txt:
http://googlewebmastercentral.blogspot.com/2010/03/url-removal-explained-part-i-urls.html
"In order for a directory or site-wide removal to be successful, the directory or site must be disallowed in the site's robots.txt file."
Side story. I once had a subdomain that I needed to take out, but I could not modify the robots.txt file properly (long story). So, we used the GWT tool and the meta noindex tag. It still worked, but I think that would only be a backup approach to the one suggested by the documentation.
-
Usually it is true that you would need to use the noindex tag to get things out of the SERPs, and would need to leave the robots.txt "open" to the crawlers. But when you are working with the remove URL tool in GWT, they recommend that you then block the site in robots.txt to keep them out of it.
The removal tool in GWT takes care of getting Google to take the URLs out, and then the robots.txt keeps the bots from coming back. It is just a different sequence than if you were to use the noindex meta tag.
-
If you create the GWT account for the dev site and you submit it for removal, GWT requires that you either a) have the site blocked in robots.txt, or b) have a noindex meta tag on the pages. Otherwise they will just crawl you again later and you are back in the index. See my post from earlier.
-
Short answer: no dev site should be public to anyone to start with (let alone Google et al.). The simplest way is to put an .htaccess password on all your dev sites. You can have a password per person in your company, or just one general password that everyone on the dev team shares.
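For illustration, a minimal sketch of that kind of password protection on Apache (the file path and the "devteam" username are assumptions, so adjust for your own server). First create the password file once on the server:

htpasswd -c /var/www/.htpasswd devteam

Then in the .htaccess at the document root of the dev site:

AuthType Basic
AuthName "Staging - authorised access only"
AuthUserFile /var/www/.htpasswd
Require valid-user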
If you do have a dev site in the SERPs, the simplest way to get it out is to set up a GWT account for that subdomain (e.g. dev.yourdomain.ext), then go into that account and request removal of all pages. You just leave the form blank for the page to be removed and it takes out the whole site. You then need a robots.txt on dev.yourdomain.ext (different from the www version) that disallows all pages for all crawlers, or else use the noindex meta tag on all pages.
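As a sketch, that robots.txt on the dev subdomain (and only on the dev subdomain - make sure it never gets pushed to the www version) would simply be:

User-agent: *
Disallow: /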
After about 1 month (or once you see that the pages are all out of the SERPs), I would put up a password on that entire site and be done with it. Key point: don't put the password up until you have let Google try to spider the site and it has seen the robots.txt etc.
Also, if you have any other staging sites out there, like test.yourdomain.ext etc., that are not indexed, go ahead and put the password up on them to limit your exposure.
Public dev sites are the fastest way to get duplicate content into the index and to jack with the ranking of your current site. It is key that all of them are locked down. If one of your developers says it is no big deal, call BS: it is a big deal and it can cause a big mess.
-
Hey Dan,
In this case, I would not exclude crawling via robots.txt. Perhaps later after you have verified the URLs are out of the index.
Just because Google can't crawl a page doesn't mean they won't keep it in the index. Excluding crawling will not get a page out of the index.
Add the NOINDEX, FOLLOW tag you listed above and give it some time.
Use GWT if it's urgent or the information is sensitive.
-
Thanks Anthony,
The staging area already exists and is indexable, as far as I can tell.
So I need to tell the developers to exclude crawling via robots.txt, and add a noindex tag to the head of each page but keep it followed so it's still crawlable, i.e. within the head section of every page on the dev area.
OR, alternatively, just remove the URLs via GWT?
If excluding crawling via the robots.txt file, why do you need to add a noindex tag to each page too? Surely the robots.txt deals with this situation?
Cheers
Dan
-
Ideally when creating a new staging area, you'd want to exclude crawling via robots.txt.
Add the NoIndex tag to the head of your pages to get them removed from the SERPs. Make sure the page is still crawlable though, as if you exclude it in robots.txt first and then NoIndex it, Google won't be able to see the new NoIndex tag.
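As a sketch, the tag being referred to, placed inside the head element of each staging page, would look something like:

<meta name="robots" content="noindex, follow">

The "follow" part keeps the links on the page crawlable while the page itself drops out of the index.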
If there are not a lot of pages to remove, you can request page removal within Google Webmaster Tools.