How long will Google take to read my robots.txt after updating?
-
I updated www.egrecia.es/robots.txt two weeks ago, and I still haven't resolved the duplicate title and content issues on the website.
The Google SERPs no longer show those URLs, but neither the SEOmoz crawl report nor Google Webmaster Tools recognizes the change.
How long will it take?
-
What I mean is that the website's access logs show:
66.249.73.219 - - [21/May/2012:21:50:58 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [21/May/2012:21:53:00 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [21/May/2012:22:05:33 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [21/May/2012:22:50:58 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [21/May/2012:23:01:31 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [21/May/2012:23:44:15 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [21/May/2012:23:50:58 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [22/May/2012:00:16:58 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [22/May/2012:00:46:02 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [22/May/2012:00:50:59 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [22/May/2012:01:24:08 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [22/May/2012:01:51:00 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [22/May/2012:01:51:17 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [22/May/2012:02:32:28 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [22/May/2012:02:50:59 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [22/May/2012:02:56:28 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [22/May/2012:03:40:58 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [22/May/2012:03:51:00 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.83.124 - - [22/May/2012:04:01:29 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
72.21.88.227 - - [22/May/2012:04:38:59 -0700] "GET /robots.txt HTTP/1.1" 304 - "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [22/May/2012:04:43:06 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [22/May/2012:04:51:02 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
-
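If you want to confirm the fetch cadence rather than eyeball the raw lines, the robots.txt requests can be tallied per crawler IP with a few lines of standard-library Python. This is a minimal sketch; the sample lines below are copies of entries from the log excerpt above:

```python
import re
from collections import defaultdict

# Sample lines copied from the access-log excerpt above.
LOG_LINES = """\
66.249.73.219 - - [21/May/2012:21:50:58 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.219 - - [21/May/2012:22:50:58 -0700] "GET /robots.txt HTTP/1.1" 200 435 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.73.206 - - [21/May/2012:21:53:00 -0700] "GET /robots.txt HTTP/1.1" 301 239 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
""".splitlines()

# Apache combined-log prefix up to the status code, for /robots.txt only.
PATTERN = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"GET /robots\.txt [^"]*" (?P<status>\d{3})'
)

def fetch_counts(lines):
    """Count Googlebot robots.txt fetches per client IP, grouped by HTTP status."""
    counts = defaultdict(lambda: defaultdict(int))
    for line in lines:
        m = PATTERN.match(line)
        if m and "Googlebot" in line:
            counts[m.group("ip")][m.group("status")] += 1
    return counts

counts = fetch_counts(LOG_LINES)
```

With the full log above, this shows each bot IP re-requesting the file roughly hourly, and that the 301 responses mean one of the requested hostnames is being redirected rather than serving the file directly, which is worth checking.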
Thanks Alan, so to see the log you enter the cached version of the URL?
-
Hello Christian.
It depends on many things.
In my logs, I see four Googlebots today. Each one has read the robots.txt file at hourly intervals.
Related Questions
-
What's up with the last Google update?
I have numerous clients who were in the top three spots on page one. They all dropped to page 2, 3, or 4, and now they are number 1 in Maps or in the top 3. Content is great on all these sites. Backlinks are high quality; we do not build for quantity, we always focus on quality. The sites have authorship information and trust, and we have excellent content written by professionals in the industry for each of the websites. The sites load super fast and are very mobile friendly, we have a CDN installed, and content is organized by topic. All of our citations are set up properly, with no duplicates or missing citations. The code on the websites is good, and we do not have anchor-text links pointing to the sites from guest posts or the like. We have plenty of content, our DA/PA is great, and audits of the websites come back great. I've been doing this a long time and I've never been so dumbfounded by what Google has done this time. Or better yet: what exactly is wrong with our clients' websites today that was working perfectly for the last five years? I'm really getting frustrated. I'm comparing my sites to competitors' and everything is better. Please, someone guide me here and tell me what I'm missing, or tell me what you have done to recover from this nonsense.
Intermediate & Advanced SEO | waqid
-
Wildcarding Robots.txt for a Particular Word in the URL
Hey all, I know this isn't standard robots.txt usage. I'm aware of how to block or wildcard certain folders, but I'm wondering whether it's possible to block all URLs containing a certain word. We have a client that was hacked a year ago, and now they want us to help remove some of the pages that were being autogenerated with the word "viagra" in them. I saw this article, https://builtvisible.com/wildcards-in-robots-txt/, and tried implementing it, and it seems I've been able to remove some of the URLs (although I can't confirm that until I do a full pull of the SERPs for the domain). However, when I test certain URLs inside WMT, it still says they are allowed, which makes me think it's not working fully, or not working at all. In this case, these are the lines I've added to the robots.txt:
Disallow: /*&viagra
Disallow: /*&Viagra
I know I have the option of individually requesting URLs to be removed from the index, but I want to see if anybody has ever had success wildcarding URLs with a certain word in their robots.txt. The individual-URL route could be very tedious. Thanks! Jon
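For what it's worth, Googlebot documents `*` as matching any sequence of characters and `$` as an end-of-URL anchor, and matching is case-sensitive (which is why the question needs both the `viagra` and `Viagra` rules). A rough way to sanity-check patterns locally is to approximate that matching yourself. This is an illustrative sketch of the documented behavior, not an official tool, and the example URLs are hypothetical:

```python
import re

def robots_pattern_matches(pattern: str, path: str) -> bool:
    """Approximate Googlebot's matching for one Disallow pattern:
    '*' matches any run of characters, '$' anchors at the end of the URL,
    and everything else is a literal, case-sensitive prefix match."""
    regex = ""
    for ch in pattern:
        if ch == "*":
            regex += ".*"
        elif ch == "$":
            regex += "$"
        else:
            regex += re.escape(ch)
    # Disallow rules are prefix matches unless anchored with '$',
    # so match from the start of the path and allow trailing characters.
    return re.match(regex, path) is not None

# Hypothetical hacked URLs, for illustration only:
robots_pattern_matches("/*&viagra", "/product?id=9&viagra-cheap")  # matches
robots_pattern_matches("/*&viagra", "/product?id=9&Viagra-cheap")  # no match
```

One caveat this makes visible: `/*&viagra` only blocks URLs where the word follows a literal `&`, so hacked URLs like `/viagra-pills/` or `/?viagra=1` would still be allowed, which may explain why WMT reports some URLs as allowed.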
Intermediate & Advanced SEO | EvansHunt
-
Now that Google will be indexing Twitter, are Twitter backlinks likely to effect website rank in the SERPs?
About a year (or two) ago, Matt Cutts said that Twitter and Facebook have no effect on website rank, in part because Google can't get to the content. Now that Google will be indexing Twitter (again), do we expect that links in Twitter posts will be useful backlinks for improving SERP rank?
Intermediate & Advanced SEO | Thriveworks-Counseling
-
Huge increase in server errors and robots.txt
Hi Moz community! Wondering if someone can help? One of my clients (an online fashion retailer) has seen a huge increase in server errors (500s and 503s) over the last six weeks, and it has got to the point where people cannot access the site because of them. The client recently changed hosting companies to deal with this, and they have just told us they removed the DNS records once the name servers were changed; they have now fixed this and are waiting for the name servers to propagate again. These errors also correlate with a huge decrease in pages blocked by the robots.txt file, which makes me think someone has perhaps changed it and not told anyone... Anyone have any ideas here? It would be greatly appreciated! 🙂 I've been chasing this up with the dev agency and the hosting company for weeks, to no avail. Massive thanks in advance 🙂
Intermediate & Advanced SEO | labelPR
-
How long does internationalisation take to be indexed correctly?
Hi guys, I have a UK site that has been indexed in Google for some time. Recently we started targeting Ireland, so I created a folder for it (domain.com/ireland/). As well as adding the /ireland/ folder, I created a hreflang sitemap, and in Webmaster Tools I specified that .com/ireland/ targets Ireland and .com targets the UK. However, this was all two weeks ago, and I'm still not seeing the Irish pages ranking in Google.ie, so I was hoping one of you would be able to help me out. How long should it take for these pages to start appearing in the relevant country-specific search engine? DeepCrawl states that the hreflang is correct as well, so I'm just a bit worried that I've missed something glaringly obvious! Thanks
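For reference, a minimal hreflang sitemap entry for the setup described above might look like the sketch below (domain.com is the question's own placeholder, and the exact paths and language codes are assumptions). One common gotcha: each URL must list the complete set of alternates, including a self-referencing entry, or Google may ignore the annotations entirely.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>http://domain.com/page/</loc>
    <xhtml:link rel="alternate" hreflang="en-GB" href="http://domain.com/page/"/>
    <xhtml:link rel="alternate" hreflang="en-IE" href="http://domain.com/ireland/page/"/>
  </url>
  <url>
    <loc>http://domain.com/ireland/page/</loc>
    <xhtml:link rel="alternate" hreflang="en-GB" href="http://domain.com/page/"/>
    <xhtml:link rel="alternate" hreflang="en-IE" href="http://domain.com/ireland/page/"/>
  </url>
</urlset>
```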
Intermediate & Advanced SEO | AndrewAkesson
-
Robots.txt assistance
I want to block all the inner archive news pages of my website in robots.txt. We don't have the R&D capacity to set up rel=next/prev or to create a central page that all the inner pages would have a canonical back to, so this is the solution. The first page, which I want indexed, reads:
http://www.xxxx.news/?p=1
All subsequent pages, which I want blocked because they don't contain any new content, read:
http://www.xxxx.news/?p=2
http://www.xxxx.news/?p=3
etc. There are currently 245 inner archive pages, and I would like to set it up so that future pages are automatically blocked, since we are always writing new news pieces. Any advice about what code I should use for this? Thanks!
Intermediate & Advanced SEO | theLotter
-
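For the archive question above, one possible set of rules is sketched below. Note that `Allow` and the `$` end-anchor are extensions supported by Googlebot (and Bingbot) rather than part of the original robots.txt standard, and that for Googlebot the most specific (longest) matching rule wins, so the exact-match `Allow` overrides the broader `Disallow` for page 1 only:

```
User-agent: *
Allow: /?p=1$
Disallow: /?p=
```

It is worth verifying the behavior in the Webmaster Tools robots.txt tester before relying on it, since other crawlers may not honor these extensions.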
Dropped Out of Google and Bing
I am helping with a site that I at one time had on page 1 of Google/Bing. The site started to slip in the rankings; then someone else did a makeover of the store and botched things by renaming pages, introducing errors into pages (multiple head/body tags), mismatching page names against the sitemap, etc. The site slipped to page 4/5. I righted things, fixed the duplication using canonicalization, and made some other changes. Now the site is gone completely from Google/Bing for the desired keyword. No penalties. The site still shows up if you do a search on the domain name. The site is www.plussizeplum.com (plus size lingerie, sorry); the target keyword is "plus size lingerie". Anyone have any clues or tips on why we fell off the face of the earth? Page Authority and Domain Authority are both comparable to most of the page 1/2 sites for the same term. Thanks for any advice.
Intermediate & Advanced SEO | dlcohen