Severe rank drop due to overwritten robots.txt
-
Hi,
Last week we made a change to Drupal core for an update to our website. We accidentally overwrote our good robots.txt, which blocked hundreds of pages, with the default Drupal robots.txt. Several hours after that happened (and before we caught the mistake), our rankings dropped from mostly first or second place in Google organic results to the middle and bottom of the first page.
Basically, I believe we flooded the index with very low-quality pages all at once, which threw a red flag and got us de-ranked.
We have since fixed the robots.txt and have been re-crawled, but have not seen our rankings return.
Is this a safe assumption about what happened? I haven't seen any other sites in the retail vertical get hit by anything like a Panda 2.3-type update yet.
Will we see a return in our results anytime soon?
Thanks,
Justin
-
Your present approach is correct. Ensure all these pages are tagged as noindex for now. Remove the block from robots.txt and let Google and Bing crawl these pages.
I would suggest waiting until you are confident all the pages have been removed from Google's index, then checking Yahoo and Bing. If you decide that robots.txt is the best decision for your company, you can replace the disallows after confirming your site is no longer affected by these pages.
I would also suggest that, going forward, you ensure any new pages on your site that you do not want indexed always include the appropriate meta tag. If this issue happens again, you will have a layer of protection in place.
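For reference, the tag being described here is the standard robots meta tag, placed in the <head> of each page you want dropped from the index (a minimal sketch; adjust it to your own templates):

    <!-- Tell crawlers to drop this page from the index but still follow its links -->
    <meta name="robots" content="noindex, follow">

Keep in mind that crawlers can only see this tag if robots.txt no longer blocks the page, which is why the disallow has to come out first.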
-
We're pretty confident at this point that we flooded the index with about 15,000 low-quality URLs all at once. Something similar happened once a few years back, though we didn't flood the index then; those were newer pages that were low quality and could have been seen as spam, since they had no real content beyond AdSense, so we removed them with a disallow in robots.txt.
We are adding the meta noindex to all of these pages. You're saying we should remove the disallow in robots.txt so Googlebot can crawl these pages and see the meta noindex?
We are a very large site and we're crawled often. We're a PR 7 site, and our Moz DA is 79/100, down from 82.
We're hoping these URLs will be removed quickly. I don't think there is a way to remove 15k links in GWMT without setting off flags either.
-
There is no easy answer for how long it will take.
If your theory that the ranking drop was caused by these pages being added is correct, then as these pages are removed from Google's index, your site should improve. The timeline depends on the size of your site, your site's DA, the PA and links of these particular pages, etc.
If it were my site, I would mark the calendar for August 1st to review the issue. I would check all the pages which were mistakenly indexed to be certain they were removed. Afterward, I would check the rankings.
-
Hi Ryan,
Thanks for your response. Actually, you are correct. We have found some of the pages that should be noindexed are still indexed. We are now going to use the noindex, follow meta tags on these pages because we can't afford to have these pages indexed; they are for clients/users only, are very low quality, and have been flagged before.
Now, how long until we see our rank move back? That's the real big question.
Thanks so much for your help.
Justin
-
That's a great answer, Ryan. I wonder, just out of curiosity, if it wouldn't hurt to look at the cached versions of the pages if they're indexed. I'd be curious to know whether the date they were cached is right around when the robots.txt was changed. I know it wouldn't alter his course of action, but it might add further confirmation that this caused the problem.
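For anyone wanting to try this, a query along these lines (the URL is hypothetical) pulls up Google's cached copy of a page, with the cache date shown at the top:

    cache:mysite.com/some-blocked-page

If the cache dates on the flooded pages cluster right around the robots.txt change, that would be further evidence for the theory.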
-
Justin,
Based on the information you provided, it's not possible to determine whether the robots.txt file was part of the issue. You need to investigate the matter further. Using Google, enter a query in an attempt to find some of the previously blocked content. For example, let's assume your site is about SEO but you shared a blog article with your review of the latest Harry Potter movie. You may have used robots.txt to block that article because it is unrelated to your site's focus. Perform a search for "Harry Potter site:mysite.com", replacing mysite.com with your main web address. If the search returns your article, then you know the content was indexed. Try this approach for several of your previously blocked areas of the website.
If you find this content in the SERPs, then you need to have it removed. The best thing to do is add the "noindex, follow" tags to all these pages, then remove the block from your robots.txt file.
The problem is that with the block in place in your robots.txt file, Google cannot see the new meta tag and does not know to remove the content from its index.
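As a hypothetical robots.txt sketch (the path is invented for illustration), the temporary state would look something like this:

    User-agent: *
    # Disallow: /client-pages/   <- commented out for now so crawlers can
    #                               reach the pages and read the noindex tag

Once you have confirmed the pages are out of the index, you can restore the Disallow line if you still want it.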
One last item to mention: Google does have a URL removal tool, but it would not be appropriate in this instance. That tool is designed to remove a page that causes direct damage by being in the index; trade secrets or other confidential information can be removed with it.
Related Questions
-
Drop in performance
Some pages in Search Console are having trouble with indexing. Although the Google index report says the URL is on Google and indexed, a live test shows something different (image attached). My performance has been badly affected (image attached). Does anybody know what the problem is?
Technical SEO | ehsanamel
-
Manual action due to hack
We have had some issues with one of our websites getting hacked. The first time it happened, we noticed it the next morning and cleaned it up before Google even realised. However, the same thing happened again over the weekend, and I came into the office to an email from Google: "Google has detected that your site has been hacked by a third party who created malicious content on some of your pages. This critical issue utilizes your site's reputation to show potential visitors unexpected or harmful content on your site or in search results. It also lowers the quality of results for Google Search users. Therefore, we have applied a manual action to your site that will warn users of hacked content when your site appears in search results. To remove this warning, clean up the hacked content, and file a reconsideration request. After we determine that your site no longer has hacked content, we will remove this manual action. Following are one or more example URLs where we found pages that have been compromised. Review them to gain a better sense of where this hacked content appears. The list is not exhaustive." We have again cleaned up the website. However, my problem is that even though we have received this email, I cannot find any evidence of the manual action having actually been applied: it doesn't show in Search Console, and I am also not getting a warning in the search results when searching for our own website or clicking on its result. That means I cannot submit a reconsideration request, yet based on my test searches I am not at all sure a manual action was actually applied. Has anyone here experienced the same issue? What do you suggest doing in this case? Thank you very much in advance for any ideas.
Technical SEO | ViviCa1
-
Robots.txt on subdomains
Hi guys! I keep reading conflicting information on this and it's left me a little unsure. Am I right in thinking that a website with a subdomain of shop.sitetitle.com will share the same robots.txt file as the root domain?
Technical SEO | Whittie
-
Ranking Drop
Hey guys, I know there is always someone here asking this question, but I have done my fair share of investigation and have not come up with a cause. My ranking dropped from page one to page two or three last Sunday in a time span of four hours. After that I read that Google was updating Panda to 4.1, but I don't know why exactly I got hit. My competition is all clear, and some have even gained rankings, while mine dropped even though my domain and page authority both increased. Any ideas would help. I have already checked all the obvious causes (such as spam, links, content, etc.).
Technical SEO | s-s
-
Will an XML sitemap override a robots.txt
I have a client with a robots.txt file that is blocking an entire subdomain, entirely by accident. Their original solution, not realizing the robots.txt error, was to submit an XML sitemap to get their pages indexed. I did not think this tactic would work, as the robots.txt would take precedence over the XML sitemap. But it worked... I have no explanation as to how or why. Does anyone have an answer to this, or any experience with a website that has had a clear Disallow: / for months, yet somehow has pages in the index?
Technical SEO | KCBackofen
-
How many times does robots.txt get visited by crawlers, especially Google?
Hi, do you know if there's any way to track how often the robots.txt file has been crawled? I know we can check when it was last downloaded in Webmaster Tools, but what I actually want to know is whether crawlers download it every time they visit any page on the site (e.g., hundreds of thousands of times every day), or less often. Thanks...
Technical SEO | linklater
-
OK to block /js/ folder using robots.txt?
I know Matt Cutts suggests we allow bots to crawl CSS and JavaScript folders (http://www.youtube.com/watch?v=PNEipHjsEPU), but what if you have lots and lots of JS and you don't want to waste precious crawl resources? Also, as we update and improve the JavaScript on our site, we iterate the version number ?v=1.1... 1.2... 1.3... etc., and the legacy versions show up in Google Webmaster Tools as 404s. For example:
http://www.discoverafrica.com/js/global_functions.js?v=1.1
http://www.discoverafrica.com/js/jquery.cookie.js?v=1.1
http://www.discoverafrica.com/js/global.js?v=1.2
http://www.discoverafrica.com/js/jquery.validate.min.js?v=1.1
http://www.discoverafrica.com/js/json2.js?v=1.1
Wouldn't it just be easier to prevent Googlebot from crawling the js folder altogether? Isn't that what robots.txt was made for? Just to be clear: we are NOT doing any sneaky redirects or other dodgy JavaScript hacks. We're just trying to power our content and UX elegantly with JavaScript. What do you guys say: obey Matt, or run the JavaScript gauntlet?
Technical SEO | AndreVanKets
-
Using robots.txt to deal with duplicate content
I have two sites with duplicate content issues. One is a WordPress blog; the other is a store (Pinnacle Cart). I cannot edit the canonical tag on either site. In this case, should I use robots.txt to eliminate the duplicate content?
Technical SEO | bhsiao