Can I rely on just robots.txt?
-
We have a test version of a client's website on a separate server before it goes onto the live server.
Somehow Google has managed to index the test site, which isn't great!
Would simply adding a robots.txt file to the root of the test site, blocking all crawlers, be good enough? Or will I also have to put the noindex/nofollow meta tags on every page of the test site?
-
You can do the inbound link check right here using SEOmoz's Open Site Explorer tool to check for links to the dev site, whether it's on a subdomain, in a subfolder, or on a separate site.
Good luck!
Paul
-
That's a great help, cheers.
Where's the best place to do an inbound link check?
-
You're actually up against a bit of a sticky wicket here, SS. You do need the noindex, nofollow meta tags on each page, as Irving mentions.
HOWEVER! If you also add a robots.txt directive blocking the site, the search crawlers won't crawl your pages and will therefore never see the noindex meta tag telling them to remove the incorrectly-indexed pages from their index.
My recommendation is for a belt & suspenders approach.
- Implement the meta noindex, nofollow tags throughout the dev site, but do NOT immediately implement the robots.txt exclusion. Wait a day or two until the pages get recrawled and the bots discover the noindex meta tags.
- Use the Remove URL tools in both Google and Bing Webmaster Tools to request removal of all the dev pages you're aware have been indexed.
- Then add the exclusion directive to the robots.txt file to keep the crawlers out from then on (leaving the noindex, nofollow tags in place).
- Check back in the SERPs periodically to verify that no other dev pages have been indexed. If they have, do another manual removal request.
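For reference, here's what those two pieces look like in practice. The meta tag goes in the head of every dev page; the robots.txt rule goes in the site root, but only after the pages have been recrawled and dropped:

```html
<!-- On every page of the dev site, inside <head> -->
<meta name="robots" content="noindex, nofollow">
```

```
# robots.txt at the root of the dev site — add this only AFTER
# the noindex tags have been seen and the pages deindexed
User-agent: *
Disallow: /
```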
Does that make sense?
Paul
P.S. As a last measure, run an inbound links check on the dev pages that got indexed to find out which external pages are linking to them. Get those inbound links removed ASAP so the search engines aren't getting any signals to index the dev site. A last option would be to simply password-protect the directory the dev site is in. A little less convenient, but guaranteed to keep the crawlers out.
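If you do go the password-protection route, a minimal sketch for an Apache server would look like the below (the .htpasswd path is just a placeholder; create that file with the `htpasswd` utility):

```
# .htaccess in the dev site's root directory
AuthType Basic
AuthName "Dev site"
AuthUserFile /path/to/.htpasswd
Require valid-user
```

Crawlers can't authenticate, so every request from them gets a 401 and nothing gets indexed.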
-
Cheers, I thought as much.
-
You cannot rely on robots.txt alone; you need to add the meta noindex tag to the pages as well to ensure they don't get indexed.
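If you want to sanity-check what a given robots.txt actually blocks before relying on it, Python's standard library can parse the rules for you. A quick sketch (the dev hostname is just an example):

```python
from urllib.robotparser import RobotFileParser

# Feed the rules in directly rather than fetching a live file
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /",
])

# A blanket Disallow blocks every crawler from every URL
print(rp.can_fetch("Googlebot", "http://dev.example.com/any-page.html"))  # False
```

Remember this only tells you the page won't be *crawled*; as discussed above, a URL can still appear in the index if other sites link to it, which is why the noindex tag has to be seen first.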