Client accidentally blocked entire site with robots.txt for a week
-
Our client was having a design firm do some website development work for them. The work was done on a staging server that was blocked with a robots.txt to prevent duplicate content issues.
Unfortunately, when the design firm made the changes live, they also moved over the robots.txt file, which blocked the good, live site from search for a full week. We saw the error (!) as soon as the latest crawl report came in.
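For reference, the staging robots.txt that got copied over was presumably the standard block-everything file:

User-agent: *
Disallow: /

while the live site's robots.txt should disallow nothing:

User-agent: *
Disallow: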
The error has been corrected, but...
Does anyone have any experience with a snafu like this? Any idea how long it will take for the damage to be reversed and the site to get back in the good graces of the search engines? Are there any steps we should take in the meantime that would help to rectify the situation more quickly?
Thanks for all of your help.
-
Here's a YouMoz post (promoted to the main blog) about what someone else did in this situation; it may help:
http://www.seomoz.org/blog/accidental-noindexation-recovery-strategy-amp-results
A couple of preventative steps would have been to make the robots.txt file on the live site read-only so it couldn't have been as easily overwritten, and to use a free service like Pole Position's Code Monitor (https://polepositionweb.com/roi/codemonitor/index.php) to monitor the contents of your robots.txt file once a day and email you if there are changes. I'd also monitor your dev robots.txt, just to make sure the live site robots.txt doesn't get copied over to dev one day and your dev site gets indexed (I've had that happen!).
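If you'd rather roll your own check instead of relying on a third-party service, a minimal daily monitor is easy to script. Here's a sketch in Python; the URL and cache path are placeholders, and you'd wire it up to cron and your own alerting:

import urllib.request
from pathlib import Path

ROBOTS_URL = "http://www.example.com/robots.txt"  # placeholder; point at your live site
CACHE = Path("robots_last_seen.txt")              # copy saved from the previous run

def check_robots():
    # Fetch the current robots.txt
    current = urllib.request.urlopen(ROBOTS_URL, timeout=10).read().decode("utf-8")
    # Compare against what we saw last time and alert on any change
    if CACHE.exists() and CACHE.read_text() != current:
        # Hook in your own alerting here (email, Slack, etc.)
        print("robots.txt changed:\n" + current)
    CACHE.write_text(current)

if __name__ == "__main__":
    check_robots()  # schedule once a day, e.g. via cron

Run it against both the live and dev robots.txt files and you'll catch an accidental overwrite in either direction within a day.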
-
I can't say anything about robots.txt
.... but one of my competitors tossed up a new design with nofollow, noindex tags on every page and their site immediately tanked out of Google.
... it took them a couple of weeks to figure it out, but once they yanked that line of code they were back at the top of the SERPs within 48 hours.
... this was a relatively strong site, and I would expect that type of site to recover faster than a PR2 site with little connectivity.
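For reference, "that line of code" is typically a meta robots tag sitting in the <head> of every page, something like:

<meta name="robots" content="noindex, nofollow">

One line like that baked into a template is all it takes to pull an entire site out of the index.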
-
Hi, have you tried logging in to Google Webmaster Tools and fetching the URL as Googlebot? This helped me recently with a couple of sites that I had blocked with robots.txt. They were up to date in the SERPs within 2 days.
Related Questions
-
Is there a limit to how many URLs you can put in a robots.txt file?
We have a site that has way too many URLs caused by our crawlable faceted navigation. We are trying to purge 90% of our URLs from the indexes. We put noindex tags on the URL combinations that we do not want indexed anymore, but it is taking Google way too long to find the noindex tags. Meanwhile we are getting hit with excessive URL warnings and have been hit by Panda. Would it help speed the process of purging URLs if we added the URLs to the robots.txt file? Could this cause any issues for us? Could it have the opposite effect and block the crawler from finding the URLs, but not purge them from the index? The list could be in excess of 100MM URLs.
Technical SEO | kcb8178
-
What are the negative implications of listing URLs in a sitemap that are then blocked in the robots.txt?
In running a crawl of a client's site I can see several URLs listed in the sitemap that are then blocked in the robots.txt file. Other than perhaps using up crawl budget, are there any other negative implications?
Technical SEO | richdan
-
Should I block Map pages with robots.txt?
Hello, I have a website that was started in 1999. On the website I have map pages for each of the offices listed on my site, of which there are about 120. Each of the 120 maps is on a separate HTML page, and there is no content on the page other than the map. I know all of the offices love having the map pages, so I don't want to remove them. So, my question is: would these pages with no real content be hurting the rankings of the other pages on our site? Should I therefore block the pages with my robots.txt? Would I also have to remove these pages (in Webmaster Tools?) from Google for blocking by robots.txt to really work? I appreciate your feedback, thanks!
Technical SEO | imaginex
-
Should we dump the https from a client site?
We inherited a site that has both http and https versions. No e-commerce or data transfer... just HTML. Should we dump the https certificate? I think it might be causing issues with indexing and possibly duplicate content. The https site shows a certificate warning message... not good. The URL is www.charlottemechanical.com
Technical SEO | theideapeople
-
How to create a sitemap for a large site (ecommerce type) that has 1,000s if not 100,000s of pages
I know this is kind of a newbie question, but I am having an amazing amount of trouble creating a sitemap for our site, Bestride.com. We just did a complete redesign (look and feel, functionality, the works) and now I am trying to create a sitemap. Most of the generators I have used "break" after reaching a certain number of pages. I am at a loss as to how to create the sitemap. Any help would be greatly appreciated! Thanks
Technical SEO | BestRide
-
4XX (Client Error)
Hello there. Please help! I am getting this kind of error across the whole site: http://www.mileycyrus-online.co.uk/leaked-hannah-montana-the-movie-pictures.html/comments The site runs on WordPress. I changed the template a few times... most of the errors end with /comments. In fact, all my posts have the same issue: http://www.mileycyrus-online.co.uk/miley-cyrus-at-golden-globes-ceremony.html/comments http://www.mileycyrus-online.co.uk/miley-cyrus-at-president-obamas-inauguration-concert.html/comments Both return a 404 error.
Technical SEO | ExpertSolutions
-
BEST WordPress Robots.txt Sitemap Practice??
Alright, my question comes directly from this SEOmoz article: http://www.seomoz.org/learn-seo/robotstxt
Yes, I have submitted the sitemap to Google's and Bing's webmaster tools, and I want to add the location of our site's sitemaps. Does that mean I erase everything in the robots.txt right now and replace it with this?

User-agent: *
Disallow:
Sitemap: http://www.example.com/none-standard-location/sitemap.xml

I ask because WordPress comes with some default disallows like wp-admin, trackback, and plugins. I have also read other questions, but was wondering if this is the correct way to add a sitemap to a WordPress robots.txt:
http://www.seomoz.org/q/robots-txt-question-2
http://www.seomoz.org/q/quick-robots-txt-check
http://www.seomoz.org/q/xml-sitemap-instruction-in-robots-txt-worth-doing
I am using Multisite with the Yoast plugin, so I have more than one sitemap.xml to submit. Do I erase everything in robots.txt and replace it with what SEOmoz recommended? Hmm, that sounds not right. My current robots.txt is:

User-agent: *
Disallow:
Disallow: /wp-admin
Disallow: /wp-includes
Disallow: /wp-login.php
Disallow: /wp-content/plugins
Disallow: /wp-content/cache
Disallow: /wp-content/themes
Disallow: /trackback
Disallow: /comments

ERASE EVERYTHING??? and change it to:

User-agent: *
Disallow:
Sitemap: http://www.example.com/sitemap_index.xml
Sitemap: http://www.example.com/sub/sitemap_index.xml

Technical SEO | joony2008
-
Quick robots.txt check
We're working on an SEO update for http://www.gear-zone.co.uk at the moment, and I was wondering if someone could take a quick look at the new robots file (http://gearzone.affinitynewmedia.com/robots.txt) to make sure we haven't missed anything? Thanks
Technical SEO | neooptic