Do I have a robots.txt problem?
-
I have the little yellow exclamation point under my robots.txt fetch, as you can see here: http://imgur.com/wuWdtvO
This version shows no errors or warnings: http://imgur.com/uqbmbug
Under the tester I can currently see the latest version. This site hasn't changed URLs recently, and we haven't made any changes to the robots.txt file for two years. This problem just started in the last month. Should I worry?
-
Today it has a green check mark, and absolutely no changes have been made to the website since I asked this question.
-
It could be that your server was having a hard time when Google tried to fetch your robots.txt file, which is why the fetch failed. As long as the issue doesn't keep blocking Google going forward, it's not much to worry about.
-
That would make me feel more confident that a false error is being reported. Time to closely monitor the crawl logs, look at server stats, and keep an eye on GWT for a change in the reporting/indexing. I would also post in the GWT forums and see if anyone is reporting a similar error over the past couple of days.
-
I can't post the domain but I know it is accessible.
When I go to the tester it shows the live robots.txt with no problems. I can also look at the server logs and see that the site is being crawled, though it is being crawled less than Bing crawls it. Bing Webmaster Tools is also showing no problems.
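If it helps, crawl frequency by bot can be tallied straight from the access logs. A minimal sketch in Python, assuming a common Apache/Nginx-style log format and the standard Googlebot/bingbot user-agent tokens (the sample lines below are made up; in real use you would read your actual log file):

```python
from collections import Counter

# Bot labels mapped to the substring identifying them in the user-agent
# field. Tokens are assumptions based on the standard Googlebot/bingbot UAs.
BOT_TOKENS = {"Googlebot": "googlebot", "Bingbot": "bingbot"}

def crawler_counts(log_lines):
    """Count how many log lines mention each bot's user-agent token."""
    counts = Counter()
    for line in log_lines:
        lowered = line.lower()
        for label, token in BOT_TOKENS.items():
            if token in lowered:
                counts[label] += 1
    return counts

# Made-up sample lines standing in for open("access.log") in real use.
sample = [
    '66.249.66.1 - - [10/May/2014] "GET /robots.txt HTTP/1.1" 200 "Googlebot/2.1"',
    '157.55.39.1 - - [10/May/2014] "GET / HTTP/1.1" 200 "bingbot/2.0"',
    '157.55.39.2 - - [10/May/2014] "GET /page HTTP/1.1" 200 "bingbot/2.0"',
]
counts = crawler_counts(sample)
print(counts["Googlebot"], counts["Bingbot"])  # 1 2
```

Comparing those two numbers over a few weeks would show whether Googlebot's crawl rate actually dropped when the warning appeared.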
-
Can you post your domain? Manually checking the robots.txt file would help.
I've checked many of my GWT accounts and I am not seeing a sudden robots.txt error. It could be a false error, but I would take anything involving the robots.txt file seriously. You'll want to make sure that it is in fact accessible to all the crawlers you want it to reach.
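One way to double-check accessibility without waiting on GWT is to run the live robots.txt rules through a parser. A sketch using Python's standard urllib.robotparser — the file content and URLs below are placeholders; in practice you would paste in your real robots.txt (or point the parser's set_url at it):

```python
from urllib.robotparser import RobotFileParser

# Placeholder rules standing in for a real robots.txt file.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Check what specific crawlers are allowed to fetch.
for agent in ("Googlebot", "bingbot"):
    print(agent, parser.can_fetch(agent, "http://example.com/"))
    print(agent, parser.can_fetch(agent, "http://example.com/private/page"))
```

If the parser says a bot can fetch your key URLs, a one-off yellow warning in GWT is much more likely to be a transient fetch failure than a rules problem.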
Related Questions
-
Blocked jQuery in robots.txt: any SEO impact?
I've heard that Google is now indexing links and content available in JavaScript and jQuery. My Webmaster Tools is showing that some jQuery links are blocked in robots.txt. Sorry, I'm not a developer or designer. I want to know: is there any impact of this on my SEO? And also, how can I unblock it for the robots? Check this screenshot: http://i.imgur.com/3VDWikC.png
Technical SEO | hammadrafique
-
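For what it's worth, the usual remedy when script assets are blocked is to allow them explicitly in robots.txt. A sketch, assuming the blocked resources are .js and .css files (the patterns are placeholders; check the blocked-resources report in Webmaster Tools for the real paths):

```
User-agent: Googlebot
Allow: /*.js$
Allow: /*.css$
```

Google's crawler supports the `*` wildcard and the `$` end-of-URL anchor, so these lines permit fetching of script and stylesheet files even if a broader Disallow rule covers their directory.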
Indexing problem
My URL is: www.memovalley.com. We submitted our sitemap last month and we are having issues seeing our URLs listed in the search results. Even though our sitemaps contain over 200 URLs, we currently have only 7 listed (excluding blog.memovalley.com). Can someone help us with this? It looks like Googlebot has timed out, at least once, for one of our URLs. Why is Googlebot timing out? My server is located at Amazon WS, in North Carolina, and it is a small instance. Could Google be querying multiple URLs at the same time and jamming my servers? Could it be because... Thanks for your help!
Technical SEO | Memovalley
-
Problem with redirection
Please help me solve a problem with redirection. I redid the site and moved it to a new domain page-by-page, as recommended. I use the WordPress Redirect plugin. I did everything as written 10 days ago but don't see the redirection. For example: old page http://aurora17.com/?page_id=2485, new page http://njcruise.org/alaska-cruise-tour/. Where is the problem? How do I solve it?
Technical SEO | NadiaFL
-
Ecommerce site: duplicate pages problem
We have an ecommerce site with multiple products displayed across a number of pages. We use rel="next" and rel="prev" and have a "display all" page, which I understand Google should be able to find automatically. Should we also be using a canonical tag to tell Google to give authority to the first page or the "all" page, or is our current use of the next and prev rel tags adequate? We currently display 20 products per page; we were thinking of increasing this to make fewer pages, which would make some later product pages redundant. If we add 301 redirects on the redundant pages, does anyone know the sort of impact this might have on traffic and SEO? General thoughts welcome if anyone has had similar problems.
Technical SEO | SarahCollins
-
WordPress robots.txt sitemap submission?
Alright, my question comes directly from this article by SEOmoz: http://www.seomoz.org/learn-seo/r... Yes, I have submitted the sitemap to Google's and Bing's webmaster tools, and I want to add the location of our site's sitemaps. Does that mean I erase everything in the robots.txt right now and replace it with this?

User-agent: *
Disallow:
Sitemap: http://www.example.com/none-standard-location/sitemap.xml

I ask because WordPress comes with some default disallows like wp-admin, trackback, plugins. I have also read this, but was wondering if it is the correct way to add a sitemap to a WordPress robots.txt: http://www.seomoz.org/q/removing-robots-txt-on-wordpress-site-problem I am using Multisite with the Yoast plugin, so I have more than one sitemap.xml to submit, like:

Sitemap: http://www.example.com/sitemap_index.xml
Sitemap: http://www.example.com/sub/sitemap_index.xml

Do I erase everything in robots.txt and replace it with what SEOmoz recommended? Hmm, that doesn't sound right.
Technical SEO | joony2008
-
Internal search: rel=canonical vs noindex vs robots.txt
Hi everyone, I have a website with a lot of internal search results pages indexed. I'm not asking if they should be indexed or not; I know they should not, according to Google's guidelines, and they make a bunch of duplicated pages, so I want to solve this problem. The thing is, if I noindex them, the site is going to lose a non-negligible chunk of traffic: nearly 13%, according to Google Analytics! I thought of blocking them in robots.txt. This solution would not keep them out of the index, but the pages appearing in Google SERPs would then look empty (no title, no description), so their CTR would plummet and I would lose a bit of traffic too. The last idea I had was to use a rel=canonical tag pointing to the original search page (which is empty, without results), but it would probably have the same effect as noindexing them, wouldn't it? (I've never tried, so I'm not sure of this.) Of course I did some research on the subject, but each of my findings recommended only one of the three methods! One even recommended noindex plus a robots.txt block, which is self-defeating because the noindex would then never be seen. Can somebody tell me which option is the best to keep this traffic? Thanks a million
Technical SEO | JohannCR
-
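For reference, the two on-page options being weighed in that question look like this (the URLs are placeholders); the robots.txt route would instead be a Disallow line covering the search path:

```html
<!-- Option 1: keep result pages out of the index but let link equity flow -->
<meta name="robots" content="noindex, follow">

<!-- Option 2: canonicalize result pages to the base search page -->
<link rel="canonical" href="http://www.example.com/search">
```

Note the key difference: the meta tag and canonical require the page to be crawled to take effect, while a robots.txt block prevents crawling but not indexing.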
Warnings for "blocked by meta robots" / meta robots nofollow: how to resolve?
Hello, I see hundreds of notices for "blocked by meta robots" / meta robots nofollow, and it appears they are linked to the comments on my site, which I assume I would not want crawled. Is this the case, and are these notices actually a positive thing? Please advise how to clear them up if they can be potentially harmful for my SEO. Thanks, Talia
Technical SEO | M80Marketing
-
Is robots.txt a must-have for 150 page well-structured site?
By looking in my logs I see dozens of 404 errors each day from different bots trying to load robots.txt. I have a small site (150 pages) with clean navigation that allows the bots to index the whole site (which they are doing). There are no secret areas I don't want the bots to find (the secret areas are behind a Login so the bots won't see them). I have used rel=nofollow for internal links that point to my Login page. Is there any reason to include a generic robots.txt file that contains "user-agent: *"? I have a minor reason: to stop getting 404 errors and clean up my error logs so I can find other issues that may exist. But I'm wondering if not having a robots.txt file is the same as some default blank file (or 1-line file giving all bots all access)?
Technical SEO | scanlin
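For anyone else hitting those 404s: the minimal allow-all file that stops them is two lines. A sketch, assuming it is served as plain text from the site root at /robots.txt:

```
User-agent: *
Disallow:
```

An empty Disallow value grants all compliant bots access to everything, which behaves the same as having no robots.txt at all, but the bots get a 200 instead of filling the error logs with 404s.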