Moz Q&A is closed.
After more than 13 years, and tens of thousands of questions, Moz Q&A closed on 12th December 2024. While we're not completely removing the content (many posts will remain viewable), we have locked both new posts and new replies.
I accidentally blocked Google with Robots.txt. What next?
-
Last week I uploaded my site and forgot to remove the robots.txt file with this text:
    User-agent: *
    Disallow: /
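For anyone who lands here later: the harmless version of that file is an empty Disallow, which blocks nothing (deleting the file entirely has the same effect):

    User-agent: *
    Disallow: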
I dropped from page 11 on my main keywords to past page 50.
I caught it 2-3 days later and have now fixed it. I re-submitted my sitemap through Webmaster Tools and also did a Fetch as Google. I tweeted out my URL too, hoping to get Google to crawl it faster.
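If you want to confirm the fix is actually live without waiting on Webmaster Tools, here is a minimal check using Python's standard library (example.com is a placeholder for your own domain):

    from urllib.robotparser import RobotFileParser

    # Fetch and parse the live robots.txt, then ask whether
    # Googlebot is allowed to crawl the homepage
    rp = RobotFileParser()
    rp.set_url("https://www.example.com/robots.txt")
    rp.read()
    print(rp.can_fetch("Googlebot", "https://www.example.com/"))  # True once the block is removed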
Webmaster Tools no longer says that the site is experiencing outages, but when I look at my blocked URLs it still says 249 are blocked. That's actually gone up since I made the fix.
In the Google search results, my listing still shows no page title, and the description still says "A description for this result is not available because of this site's robots.txt – learn more."
How will this affect me long-term? When will I recover my rankings? Is there anything else I can do?
Thanks for your input!
-
Excellent. Good luck on your climb to page 1.
-
RESULTS: So Google re-crawled my site at some point since I fetched it through Webmaster Tools. My page in the search results now has my title and meta description back and I bounced back to page 15. Page 15 isn't too abnormal, as before I accidentally blocked myself I had been bouncing around from page 11 to 18.
Thanks for your input, and hopefully this will help someone in the future.
-
Thanks for the input. Where do I fetch my URL within GWT? Any ideas on how quickly I'll regain my rankings? Will I have to make up some ground with my SEO work, or will Google just place me back where I was once they re-crawl my site?
-
I agree with Sebastian's follow-up suggestions. It's way quicker to de-index yourself than it is to get those rankings back. Even once the robots.txt is "cleared", it can take time to get back to where you were.
-
Hi,
Sorry to hear you're having issues; we've all accidentally blocked off our site at one point or another, I'm sure.
You've done pretty much everything you can to get your site re-indexed. The only other thing I'd suggest is to run Fetch as Google within GWT, then take it a step further by submitting the URL and all of its linked pages to the index.
If you haven't recovered within a week or so, then it may be an idea to do a reconsideration request.
Good luck!
Related Questions
-
Robots.txt allows wp-admin/admin-ajax.php
Hello, Mozzers!
Technical SEO | AndyKubrin
I noticed something peculiar in the robots.txt used by one of my clients:

    Allow: /wp-admin/admin-ajax.php

What would be the purpose of allowing a search engine to crawl this file?
Is it OK? Should I do something about it?
Everything else on /wp-admin/ is disallowed.
Thanks in advance for your help.
-AK
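For context, this Disallow/Allow pair is the default pattern WordPress itself generates: admin-ajax.php handles front-end AJAX requests from themes and plugins, so the Allow carves it back out of the otherwise blocked directory to keep rendering intact. A typical version looks like:

    User-agent: *
    Disallow: /wp-admin/
    Allow: /wp-admin/admin-ajax.php
-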
Is there a limit to how many URLs you can put in a robots.txt file?
We have a site with far too many URLs generated by our crawlable faceted navigation, and we are trying to purge 90% of them from the indexes. We put noindex tags on the URL combinations that we no longer want indexed, but it is taking Google far too long to find those tags. Meanwhile we are getting hit with excessive-URL warnings and have been hit by Panda. Would adding the URLs to the robots.txt file help speed up the purge? Could this cause any issues for us? Could it have the opposite effect, blocking the crawler from finding the URLs but not purging them from the index? The list could be in excess of 100MM URLs.
Technical SEO | kcb8178
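Worth spelling out the mechanism here: a noindex directive only works if the crawler can actually fetch the page, so disallowing those URLs in robots.txt would hide the tag from Googlebot and could leave the pages stuck in the index. The tag being relied on is:

    <meta name="robots" content="noindex">

If robots.txt blocks a URL, Googlebot never requests the HTML and never sees that tag.
-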
Blocked jquery in Robots.txt, Any SEO impact?
I've heard that Google now indexes links and content loaded via JavaScript and jQuery. My Webmaster Tools account shows that some jQuery resources are blocked by robots.txt. Sorry, I'm not a developer or designer. I want to know: does this have any impact on my SEO? And how can I unblock those resources for the robots? Check this screenshot: http://i.imgur.com/3VDWikC.png
Technical SEO | hammadrafique
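Since Google renders pages and has asked webmasters not to block script and style files, a common fix is an explicit carve-out in robots.txt. A minimal sketch (Google supports the * and $ wildcards shown here):

    User-agent: Googlebot
    Allow: /*.js$
    Allow: /*.css$
-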
How do I get out of google bomb?
Hi all, I have a website named bijouxroom.com. I was on the 7th page for the search term takı in Google, and on the 2nd page for online takı. Now I see that within one day my results dropped to the 13th and 10th pages respectively. I had built too much anchor text for takı and online takı. What should I do to regain my positions? Thanks in advance. Regards,
Technical SEO | ozererim
-
Robots.txt to disallow /index.php/ path
Hi SEOmoz, I have a problem with my Joomla site (yeah, me too!). I get a large number of /index.php/ URLs despite using a program to handle these issues. The URLs cause indexation errors with Google (404). Now, I fixed this issue once before, but the problem persists. So I thought, instead of wasting more time, couldn't I just disallow all paths containing /index.php/? I don't use that extension, but would it cause me any problems from an SEO perspective? How do I disallow all index.php paths? Is it as simple as: Disallow: /index.php/
Technical SEO | Mikkehl
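For reference, the directive suggested above is syntactically valid; robots.txt matches by path prefix, so it blocks every URL whose path begins with /index.php/:

    User-agent: *
    Disallow: /index.php/

Note that this stops future crawling but does not remove URLs that are already indexed.
-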
Rel = prev next AND canonical?
I have product category pages that correctly use rel=prev/next, but the Moz crawl is giving me duplicate content errors. I wouldn't think I also need a canonical tag, but do I?
Technical SEO | JohnBerger
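For illustration, Google's guidance at the time was that paginated pages could carry both the pagination markup and a self-referencing canonical on the same page. A sketch with placeholder URLs:

    <link rel="canonical" href="https://example.com/category?page=2">
    <link rel="prev" href="https://example.com/category?page=1">
    <link rel="next" href="https://example.com/category?page=3">
-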
Allow or Disallow First in Robots.txt
If I want to override a Disallow directive in robots.txt with an Allow command, do I have the Allow command before or after the Disallow command? Example:

    Allow: /models/ford/*/*/page*
    Disallow: /models/*/*/*/page
Technical SEO | irvingw
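For what it's worth, order doesn't matter to Googlebot: it applies the most specific matching rule (the longest path), and when an Allow and a Disallow tie, the less restrictive Allow wins. So a carve-out like this sketch works wherever the lines appear:

    User-agent: *
    Disallow: /models/
    Allow: /models/ford/

(Some older crawlers did process rules in file order, so putting Allow first is the safer habit.)
-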
Robots.txt and canonical tag
The SEOmoz post http://www.seomoz.org/blog/robot-access-indexation-restriction-techniques-avoiding-conflicts says that if you have a robots.txt disallow in place for a page, the canonical tag will never be seen. Does that mean that if a page is disallowed by robots.txt, spiders do not read the HTML code at all?
Technical SEO | seoug_2005
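The short answer is yes: a URL disallowed in robots.txt is never fetched, so nothing in its HTML can be read, including a canonical hint like this placeholder example:

    <link rel="canonical" href="https://example.com/preferred-page/">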