Large robots.txt file
We're looking at potentially creating a robots.txt with around 1,450 lines in it. This would remove 100k+ pages from the crawl, all of them old pages (I know the ideal would be to delete/noindex them, but that's not viable, unfortunately).
The issue I'm anticipating is that a robots.txt that large will either stop the file from being followed at all or slow our crawl rate down.
Does anybody have any experience with a robots.txt of that size?
Answered my own question:
https://developers.google.com/webmasters/control-crawl-index/docs/robots_txt?csw=1#file-format
"A maximum file size may be enforced per crawler. Content which is after the maximum file size may be ignored. Google currently enforces a size limit of 500kb."
Related Questions
Canonicals for Splitting up large pagination pages
Hi there, our dev team are looking at speeding up load times and making pages easier to browse by splitting our pagination pages into 10 items per page rather than 1000s (exact number to be determined). Sounds like a great idea, but we're a little concerned about the canonicals on this one. At the moment we use a self-referencing rel canonical plus rel prev and next, so page b canonicals to b, with prev a and next c, and so on for each letter. The new URL structure will be a1, a(n+), b1, b(n+), c1, c(n+). Should the prev/next chain loop through the whole new structure, or should each letter loop within itself? Either: b1 canonicals to b1, prev a(n+), next b2 - even though they're not strictly continuing the sequence. Or: a1 canonicals to a1, next a2; a2 canonicals to a2, prev a1, next a3 | b1 canonicals to b1, next b2; b2 canonicals to b2, prev b1, next b3, etc. Would love to hear your points of view - hope that all made sense 🙂 I'm leaning towards the first option even though it doesn't continue the letter sequence, because it loops alphabetically, which is what currently works for us. This is an example of the page we're hoping to split up: https://www.world-airport-codes.com/alphabetical/airport-name/b.html
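Purely as an illustration of the second option (each letter looping within itself), the head of a hypothetical b2 page might carry something like this; the b1/b2/b3 URLs are assumptions based on the example page above, not the real structure:

```html
<!-- Hypothetical tags for page b2 if the prev/next chain stays within the letter "b" -->
<link rel="canonical" href="https://www.world-airport-codes.com/alphabetical/airport-name/b2.html">
<link rel="prev" href="https://www.world-airport-codes.com/alphabetical/airport-name/b1.html">
<link rel="next" href="https://www.world-airport-codes.com/alphabetical/airport-name/b3.html">
```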
Pages blocked by robots
**It was a mistake made during the development process.** How can I fix the problem quickly? Please help me. [XTRjH](https://imgur.com/a/XTRjH)
Help with Robots.txt On a Shared Root
Hi, I posted a similar question last week asking about subdomains, but a couple of complications have arisen. Two different websites I am looking after share the same root domain, which means they will have to share the same robots.txt. Does anybody have suggestions for separating the two within the same file without complications? It's a tricky one. Thank you in advance.
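For what it's worth, robots.txt is read per host, so if both sites really do resolve to the same host the rules can only be separated by URL path (or by user-agent, if the crawlers differ). A minimal sketch, assuming each site is served from its own top-level directory (the directory names below are made up):

```
User-agent: *
# Rules intended for site A, assuming it lives under /site-a/
Disallow: /site-a/private/
# Rules intended for site B, assuming it lives under /site-b/
Disallow: /site-b/staging/
```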
What happens if one removes the disavow file from a non-penalised site
What happens if one removes the disavow file from a site that has not received a manual penalty from Google, although the site did suffer a drop in traffic and rankings?
Robots.txt: how to exclude sub-directories correctly?
Hello here, I am trying to figure out the correct way to tell search engines to crawl this:
http://www.mysite.com/directory/
But not this:
http://www.mysite.com/directory/sub-directory/
or this:
http://www.mysite.com/directory/sub-directory2/sub-directory/...
The trouble is that I have thousands of sub-directories with almost infinite combinations, so I can't write out definitions like these in any manageable way:
disallow: /directory/sub-directory/
disallow: /directory/sub-directory2/
disallow: /directory/sub-directory/sub-directory/
disallow: /directory/sub-directory2/subdirectory/
etc...
I would end up with thousands of definitions to disallow all the possible sub-directory combinations. So, is the following a correct, better and shorter way to define what I want above:
allow: /directory/$
disallow: /directory/*
Would the above work? Any thoughts are very welcome! Thank you in advance. Best, Fab.
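For reference, a minimal sketch of the pattern being asked about, assuming Google-style wildcard support (`*` and `$` are extensions honoured by Googlebot and most major crawlers, not part of the original robots.txt spec):

```
User-agent: *
# Block everything below /directory/ ...
Disallow: /directory/
# ... but allow /directory/ itself; "$" anchors the match to the end of the URL
Allow: /directory/$
```

Because Google resolves conflicts using the most specific (longest) matching rule, /directory/ itself stays crawlable while /directory/anything/ is blocked; the trailing `*` in `Disallow: /directory/*` is redundant, since rules are prefix matches anyway.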
Meta canonical or simply robots.txt for other domain names with the same content?
Hi, I'm working with a new client who has a main product website. This client has representatives who also sell the same products, but all those reps have a copy of the same website on another domain name. The best thing would probably be to shut down the other (duplicate) websites and 301 redirect them to the main one, but in the client's mind that's impossible. First choice: implement a canonical meta for all the URLs on all the other domain names. Second choice: robots.txt with disallow for all the other websites. Third choice: I'm really open to other suggestions 😉 Thank you very much! 🙂
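As an illustration of the first choice, each page on a rep's copy would carry a cross-domain canonical pointing at the matching URL on the main site; the domain and path below are placeholders:

```html
<!-- On the rep's copy of a product page, pointing back to the main site (placeholder URLs) -->
<link rel="canonical" href="https://www.main-product-site.example/products/widget.html">
```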
301 redirect or robots.txt on an interstitial page
Hey guys, I have an affiliate tracking system that works like this: an affiliate puts a certain code on his site, for example: www.domain.com/track/aff_id. This URL leads to a page where the hit is counted and analysed, and which then 302 redirects to my sales page with the affiliate's ID in the URL: www.mysalespage.com/?=aff_id. However, we've noticed recently that one affiliate seems to be ranking for our own name, and the URL Google indexed was his tracking URL (domain.com/track/aff_id). Which is strange, because there is absolutely nothing on that page; it's just an interstitial page so that our stats tracking software can properly filter hits. To remove the affiliate's URL from the SERPs, I've come up with 2 solutions: 1 - Change the redirect on his track page to a 301. 2 - Change our robots.txt to block all domain.com/track/ pages from being indexed. My question is: if I 301 redirect instead of 302, will I keep the affiliate from outranking me for my own name AND pass on link juice, or should I simply block Google from crawling the interstitial tracking pages?
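As a point of reference only, option 2 would look something like the following, assuming all tracking URLs live under /track/:

```
User-agent: *
# Keep crawlers away from the interstitial affiliate-tracking URLs
Disallow: /track/
```

The trade-off is that a disallow only stops crawling, so an already-indexed /track/ URL can linger in the results, whereas a 301 consolidates signals onto the sales page.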
What Are the Benefits of Developing a Large HTML Sitemap?
I've developed a very simple HTML sitemap on Vista Stores. Today, I was checking Magento extensions and came across a great extension that will help me create a large HTML sitemap on my website, similar to the following ones: http://wiredsport.com/sitemap/ http://www.breathalyzers.com/sitemap/ http://slindi.com/sitemap/ Which is the best structure for an HTML sitemap, and what are the benefits of developing a big HTML sitemap with all pages?
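As a rough sketch of the structure those examples use (category headings with plain, crawlable links underneath; the names and URLs here are made up):

```html
<!-- Minimal HTML sitemap sketch: one section per category, plain <a> links -->
<h2>Category A</h2>
<ul>
  <li><a href="/category-a/product-1/">Product 1</a></li>
  <li><a href="/category-a/product-2/">Product 2</a></li>
</ul>
<h2>Category B</h2>
<ul>
  <li><a href="/category-b/product-3/">Product 3</a></li>
</ul>
```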