Recovering from robots.txt error
-
Hello,
A client of mine is going through a bit of a crisis. A developer (at their end) added Disallow: / to the robots.txt file. Luckily the SEOmoz crawl ran a couple of days after this happened and alerted me to the error. The robots.txt file was quickly updated, but the client has found the vast majority of their rankings have gone.
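For anyone unfamiliar with how small the mistake is: while the block was live, the file would have looked roughly like this - two lines that deindex an entire site:

User-agent: *
Disallow: /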
It took a further 5 days for GWMT to register that the robots.txt file had been updated, and since then we have used "Fetch as Google" and "Submit URL and linked pages" in GWMT.
In GWMT the "Blocked URLs" section is still showing that the vast majority of pages are blocked, although the robots.txt file displayed below it is now fine.
I guess what I want to ask is:
- What else is there that we can do to recover these rankings quickly?
- What time scales can we expect for recovery?
- More importantly, has anyone had any experience with this sort of situation, and is full recovery normal?
Thanks in advance!
-
Great info Rikki,
that's good news!
-
Hi Antonio,
I would take a look at your entire site using one of my very favorite tools. This tool will crawl your site and tell you if you have nofollows or other issues that would cause Googlebot to have trouble indexing your site.
Simply put your site's URL in the box presented in the tool, which you can find at the link here:
http://www.feedthebot.com/tools/spider/
Then use the second tool, which displays the number of links (internal, external, nofollow, image, etc.) found on a webpage:
http://www.feedthebot.com/tools/linkcount/
You can then see if there is a nofollow that might be creating a real problem inside a page. Using these two URLs, you should be able to get to the bottom of this.
Check as much of your site as you possibly can with this, as it will show you a lot of information that is very relevant to whether your site can be crawled correctly or not.
This third tool will show you if your robots.txt file is still blocking all or part of your website. The nice thing about it is that, although it is built to generate robots.txt files, if you simply put your URL in at the top and hit the upload button it will pull in your current robots.txt file. This is very helpful when comparing changes that have been made, or that you wish to make:
http://www.internetmarketingninjas.com/seo-tools/robots-txt-generator/
To check your robots.txt file against whatever could be blocking your site, I think these will help:
http://moz.com/blog/interactive-guide-to-robots-txt
http://moz.com/learn/seo/robotstxt
http://tools.seobook.com/robots-txt/
http://yoast.com/x-robots-tag-play/
https://developers.google.com/webmasters/control-crawl-index/docs/robots_meta_tag?hl=de
http://www.searchenginejournal.com/x-robots-tag-simple-alternate-robots-txt-meta-tag/67138/
One point that I hope will help you: the difference between allowing everything and disallowing everything is not very noticeable - simply having a / after Disallow: will tell Google that you do not want to show up in their search engine results.
Simply put, per the information below, websites by default are set up with:
Allow: /
Example robots.txt formats
Allow indexing of everything:
User-agent: *
Disallow:
or
User-agent: *
Allow: /
Disallow indexing of everything:
User-agent: *
Disallow: /
Disallow indexing of a specific folder:
User-agent: *
Disallow: /folder/
Please remember there are multiple ways to block a website. For instance, PHP-based websites are extremely popular, and if you're using WordPress or many other PHP platforms, a single line like this will block indexing:
header("X-Robots-Tag: noindex", true);
I want to remind you of what Tom Roberts said in the first response about using Twitter. I have quoted him here; however, you can read it elsewhere on this page, below the first question:
"The most frequently crawled domain on the web is Twitter. If you could legitimately get your key URLs tweeted, either by yourselves or others, this may encourage the Google crawler to revisit the URLs, and consequently re-index them. There won't be any harm SEO-wise in sending tweets with your URLs; it's a quick and free method and so may be worth giving it a shot."
Hope This Helps,
Thomas
-
Hi Antonio,
Sorry to hear you have had the same problem; due to the nature of our client's business, this error by the developer cost them a load of lost revenue.
In answer to your questions:
-
It took 19 days in total to recover
-
We took everyone's advice and implemented it, but I am unsure what actually helped. I think working with GWMT is the best thing for it. Make sure you submit for a re-crawl as soon as possible and see what is still blocked.
I know how scary the situation is, but things will go back to normal. It's just a matter of playing the waiting game really; sorry I couldn't be of more help.
Rikki
-
Hi Rikki,
I know it's been some time since your post; however, I just found it because a couple of weeks ago my developer did exactly the same thing.
It's been 2 weeks now and our traffic is still only a quarter of what it used to be. My questions are:
1/ How long did it finally take you to completely recover your previous traffic levels (if you did)?
2/ Did you apply any of the advice from the other posters? What would you recommend doing, based on your experience?
Thanks in advance. I am really worried at the moment, since we've got a peak campaign coming up very soon.
Regards,
Antonio (Citricamente)
-
Hi Rikki,
I really want to say great job with those numbers - it's always good to see somebody pulling positive ROI. Good work! If I may ask, what type of development do you specialize in, if you have a specialty?
My reason for asking is that there are some excellent hosts that will let you run a staging server and that change everything like robots.txt back to follow and index when you hit the production button. Other hosts have similar methods.
In fact, that might be an idea that's worth a little bit of money: a nice WordPress plug-in that gives you a constant reminder while in the development phase, does the swap, then deletes itself.
Or use a managed WordPress host if it's WordPress.
You can do so many cool things with git these days.
I am extremely happy you have found out there's nothing to worry about. If it is simply the tags, you will have your rank back before you know it. You can also put the Webmaster Tools crawl rate on the manual setting and turn it up to the maximum; I have done it on test sites and the site was indexed just as well. I would simply make sure I had a reminder telling me to return it to normal afterwards.
You should set rel="canonical" as well.
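For reference, the canonical tag goes in the head of each page and points at the preferred URL. A minimal sketch, with example.com standing in for the real domain:

<link rel="canonical" href="http://www.example.com/preferred-page/" />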
Glad I was able to help,
Thomas
-
Hi guys,
Thanks very much for the responses. I guess my gut feeling was right that everything would come back to normal, but I just needed some reassurance.
I have made real progress with this client, going from online revenue of £15k per month at the start of the year to £105k last month, but it is all phone based, so at the moment his call centre is like a ghost town. It's a shame that this can happen when a developer is trying to block his own dev subdomain and ends up blocking the whole thing. Just hope it doesn't take too long.
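For anyone who finds this later: robots.txt is fetched per hostname, so the block only needed to live on the dev subdomain. A rough sketch, using a hypothetical dev.example.com - the file served at dev.example.com/robots.txt blocks everything:

User-agent: *
Disallow: /

while the one served at www.example.com/robots.txt leaves the live site open:

User-agent: *
Disallow: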
We will certainly try the social media route to see if that speeds things along.
-
Please look and see that I updated my response. I copied it from my dictation software's writing pad and only copied a part of it when I meant to copy all of it.
Please read it and let me know if I can be of help.
sincerely,
Thomas
-
Please forgive my first comment - I hit the button too early. I use dictation software, so I save the text to one page and then paste it to another, and I am sincerely sorry I posted this part without the entire thing.
Send me the domain, either privately if you can or through this chat, and I would be more than happy to look into it for you. I can tell you I have made the nofollow/noindex mistake myself, showing an intern something on our own site, and I talk about it below.
However, if you are still having problems, you may want to download Screaming Frog SEO Spider. The free version will only check 500 URLs, but it gives you invaluable insight. It is a download and works on Mac, Windows and Linux:
http://www.screamingfrog.co.uk/seo-spider/
If you want to try something web-based, I would strongly recommend the excellent Internet Marketing Ninjas tools, which you can access for free:
http://www.internetmarketingninjas.com/tools/
http://www.internetmarketingninjas.com/broken-links-tool/
http://www.internetmarketingninjas.com/seo-tools/robots-txt-generator/
http://www.internetmarketingninjas.com/seo-tools/google-sitemap-generator/
I would also not hesitate to use their DNS tool to check that everything there is okay.
The words used in the metadata tags, in body text and in anchor text in external and internal links all play important roles in on page search engine optimization (SEO). The On-Page Optimization Analysis Free SEO Tool lets you quickly see the important SEO content on your webpage URL the same way a search engine spider views your data. This free SEO onpage optimization tool is multiple onpage SEO tools in one, helpful for reviewing the following onpage optimization information in the source code on the page:
- Metadata tool: Displays text in title tags and meta elements
- Keyword density tool: Reveals onpage SEO keyword statistics for linked and unlinked content
- Keyword optimization tool: Analyzes on page optimization by showing the number of words used in the content, including anchor text of internal and external links
- Link Accounting tool: Displays the number and types of links used
- Header check tool: Shows HTTP Status Response codes for links
- Source code tool: Provides quick access to on-page HTML source code
If you are talking about just the noindex and nofollow tags, I can now happily say I have done this identical thing.
I was showing somebody how to use the WordPress SEO plug-in when I got distracted and simply did not change the settings back to follow and index. Approximately 2 to 3 days later I noticed a huge loss in rankings, even for the company brand name.
(Luckily this was my site, not a client's.)
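If you want to check whether a plug-in has left this on a page, view the page source and look for the robots meta tag in the head. Roughly speaking, the broken and fixed states look like this (a sketch, not the plug-in's exact output):

<meta name="robots" content="noindex, nofollow" /> <!-- blocks the page -->
<meta name="robots" content="index, follow" /> <!-- the safe default -->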
It took approximately two days after I changed the settings back to the normal follow and index and then submitted my entire website in Google Webmaster Tools, even clicking yes when asked whether to index it as a large change.
Before I knew it, all the rankings had returned to normal. The keywords I was tracking came back to within the normal fluctuation I see - in many cases sometimes better, sometimes a little worse - when I had feared they would never come back at all.
Sincerely,
Thomas
Believe me when I say I was extremely thankful for this, and I don't see why you would not get the same results with your site.
I hope this is as simple a mistake as mine - just that one problem - as that's the only thing I can give you a testimony of. I would say you have nothing to worry about. But remember to tell Google Webmaster Tools; I also told Bing, but that's up to you.
-
Recovery should be as quick as Google re-crawling the robots.txt.
The best thing you can do is get a couple of links on sites that are crawled daily, to encourage Google to visit your client's site as soon as possible.
These could be:
- newspaper site comment sections
- and the like
-
Hey there
I've seen this before and in almost all cases the rankings were returned to their previous state, give or take maybe 1 or 2 places (which would be normal SERP flux).
Unfortunately, I've found that this can often take weeks and there's no real sure-fire way of getting Google to update it quicker. Theoretically, to speed things up you want to get the crawler revisiting the URLs more and more often. Fresh backlinks would do this, but obviously you can't game that sort of thing, for web spam reasons. You could also try pinging services, such as GooglePing, but I'm not convinced by their effectiveness.
The most frequently crawled domain on the web is Twitter. If you could legitimately get your key URLs tweeted, either by yourselves or others, this may encourage the Google crawler to revisit the URLs, and consequently re-index them. There won't be any harm SEO-wise in sending tweets with your URLs; it's a quick and free method and so may be worth giving it a shot.
Hope this helps you - I've often found you can't control these things, but hopefully some of these theories might work. In the long run, however, the rankings will return, so for normal SEO purposes, create content and links as per usual.