What should I block with a robots.txt file?
-
Hi Mozzers,
We're having a hard time getting our site indexed, and I have a feeling my dev team may be blocking too much of our site via our robots.txt file.
They say they have disallowed the .php and Smarty template files.
Is there any harm in allowing these pages?
Thanks!
-
Hi Andy, here you go: www.consumerbase.com/robots.txt
I know we want to block the .html files, but I am unsure about the other folders.
I guess I would need to know for certain from my programmers that none of our content is in there?
-
I'm not too hot on Smarty, but doesn't it generate the HTML templates?
That shouldn't cause a problem, though: the files being generated are HTML, so as long as your devs have set this up correctly, it should be fine.
Do you want to ping me the robots file or URL over and I will have a look for you?
Andy
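If it helps to sanity-check a robots.txt before (or after) publishing it, Python's standard-library `urllib.robotparser` can tell you which paths a given set of rules blocks. The rules and URLs below are hypothetical, just to illustrate the Smarty/PHP situation described above:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules, similar in spirit to blocking Smarty/PHP internals
rules = """\
User-agent: *
Disallow: /smarty/
Disallow: /includes/
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# Template internals are blocked...
print(rp.can_fetch("*", "http://example.com/smarty/page.tpl"))  # False
# ...but the generated HTML pages remain crawlable.
print(rp.can_fetch("*", "http://example.com/products.html"))    # True
```

Note that `robotparser` implements the original robots.txt rules, not Google's wildcard extensions, so it is only a rough check.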
Related Questions
-
Robots.txt was set to disallow for 14 days
We updated our website and accidentally overwrote our robots.txt file with a version that prevented all crawling ("Disallow: /"). We realized the issue 14 days later, after our organic visits began to drop significantly, and quickly replaced the robots file with the correct version so crawling could begin again. Given the impact on our organic visits, we have a few questions, and any help would be greatly appreciated: Will the site get back to its original status/ranking? If so, how long would that take? Is there anything we can do to speed up the process? Thanks
Intermediate & Advanced SEO | | jc42540 -
How long to re-index a page after being blocked
Morning all! I am doing some research at the moment and am trying to find out, just roughly, how long you have ever had to wait to have a page re-indexed by Google. For this purpose, say you had blocked a page via meta noindex or disallowed access by robots.txt, and then opened it back up. No right or wrong answers, just after a few numbers 🙂 Cheers, -Andy
Intermediate & Advanced SEO | | Andy.Drinkwater0 -
Application & understanding of robots.txt
Hello Moz World! I have been reading up on robots.txt files, and I understand the basics. I am looking for a deeper understanding of when to deploy particular tags, and when a page should be disallowed because it will affect SEO. I have been working with a software company that has a News & Events page which I don't think should be indexed. It changes every week, and is only relevant to potential customers who want to book a demo or attend an event, not so much to search engines. My initial thinking was that I should use a noindex/follow tag on that page. That way, the page would not be indexed, but all of its links would still be crawled. I decided to look at some of our competitors' robots.txt files: Smartbear (https://smartbear.com/robots.txt), b2wsoftware (http://www.b2wsoftware.com/robots.txt) & labtech (http://www.labtechsoftware.com/robots.txt). I am still confused about which type of tags I should use, and how to gauge which set of tags is best for certain pages. I figure a static page is pretty much always good to index and follow, as long as it's public, and I should always include a sitemap file. But what about a dynamic page? What about pages that are out of date? Will this help with soft 404s? This is a long one, but I appreciate all of the expert insight. Thanks ahead of time for all of the awesome responses. Best Regards, Will H.
Intermediate & Advanced SEO | | MarketingChimp100 -
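For the noindex/follow approach described above, the tag goes in the `<head>` of each page you want kept out of the index while still letting crawlers follow its links. A minimal sketch (the surrounding page is hypothetical):

```html
<!-- Keep this page out of the index, but let crawlers follow its links -->
<meta name="robots" content="noindex, follow">
```

One caveat worth knowing: for the meta tag to be seen at all, the page must not also be blocked in robots.txt, since a disallowed page is never crawled.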
How to leverage browser cache a specific file
Hello all,
Intermediate & Advanced SEO | | asbchris
I am trying to figure out how to add leverage browser caching to these items: http://maps.googleapis.com/maps/api/js?v=3.exp&sensor=false&language=en http://ajax.googleapis.com/ajax/libs/webfont/1/webfont.js http://www.google-analytics.com/analytics.js What's hard is that I understand the purpose, but unlike a CSS file, how do you specify an expiration on an actual direct-path file? Any help or link to get help is appreciated. Chris0
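One thing worth noting about the three URLs above: cache lifetimes are set by the server that delivers the file, so a rule on your own site cannot change the expiry Google sends for `analytics.js` or `webfont.js`. What you can do is set long expirations for assets you host yourself, roughly like this (an Apache `mod_expires` sketch; the types and lifetimes are just illustrative):

```apache
# Requires mod_expires to be enabled on the server
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType text/css               "access plus 1 month"
  ExpiresByType application/javascript "access plus 1 month"
  ExpiresByType image/png              "access plus 1 year"
</IfModule>
```

For the third-party scripts, the usual options are to accept the short expiry Google sets, or to serve a periodically refreshed local copy, which trades the caching win against getting updates automatically.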
Robots.txt: how to exclude sub-directories correctly?
Hello here, I am trying to figure out the correct way to tell SEs to crawl this: http://www.mysite.com/directory/ But not this: http://www.mysite.com/directory/sub-directory/ or this: http://www.mysite.com/directory/sub-directory2/sub-directory/... But given that I have thousands of sub-directories with almost infinite combinations, I can't write the definitions in a manageable way: disallow: /directory/sub-directory/ disallow: /directory/sub-directory2/ disallow: /directory/sub-directory/sub-directory/ disallow: /directory/sub-directory2/subdirectory/ etc... I would end up with thousands of definitions to disallow all the possible sub-directory combinations. So, is the following a correct, better, and shorter way to define what I want above: allow: /directory/$ disallow: /directory/* Would the above work? Any thoughts are very welcome! Thank you in advance. Best, Fab.
Intermediate & Advanced SEO | | fablau1 -
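For reference, the pattern Fab describes relies on the wildcard extensions (`*` and `$`) that Google and Bing support, though they are not part of the original robots.txt standard. A sketch with a placeholder directory name:

```text
User-agent: *
# Allow only the directory index itself ($ anchors the match at the end of the URL)
Allow: /directory/$
# Block everything deeper than /directory/
Disallow: /directory/*
```

For `/directory/` itself both rules match at the same length, and crawlers that support these extensions generally resolve such ties in favor of Allow, so the index page stays crawlable while the sub-directories are blocked. It is worth verifying the behavior with Search Console's robots.txt tester before relying on it.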
When you add 10,000 pages that have no real intention to rank in the SERPs, should you "follow,noindex" them or disallow the whole directory through robots.txt? What is your opinion?
I just want a second opinion 🙂 The customer doesn't want to lose any internal link value by vaporizing it through a big number of internal links. What would you do?
Intermediate & Advanced SEO | | Zanox0 -
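The key difference between the two options in the question: a robots.txt `Disallow` stops crawling entirely, so links on those pages are never seen, while a `noindex, follow` meta tag keeps the pages out of the index but still lets crawlers follow their links. Hypothetical sketches of both (the directory name is a placeholder):

```text
# Option A – robots.txt: the pages are never crawled at all
User-agent: *
Disallow: /thin-pages/

# Option B – per-page meta tag in each page's <head>:
# crawled but not indexed, links still followed
<meta name="robots" content="noindex, follow">
```

If preserving internal link flow is the concern, Option B is generally the safer fit, at the cost of Google still spending crawl budget on those 10,000 pages.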
Why are these m. results showing as blocked?
If you go to http://bit.ly/173gdWK, you'll see that m. results are showing as blocked by robots.txt, but we don't have anything in our robots.txt file that specifies to block m. results. Any ideas why these URLs show as blocked?
Intermediate & Advanced SEO | | nicole.healthline0 -
C Block IP Links Strategy
Hi guys, I run a web design company and have around 50 sites that I have designed. Most don't have links back to us, but I was considering adding a footer link on each site pointing to a blog page within that site; each post on each site would have unique content about the project and about us as a design company. As you can see, most of my IP addresses share C blocks. Any advice here please, thanks in advance. Example IP list
Intermediate & Advanced SEO | | Will_Craig
abc.32.230.1
def.20.252.37
ghi.48.68.82
zz.32.229.131
zz.32.231.208
zz.32.253.87
xx.170.40.170
xx.170.40.172
xx.170.40.232
xx.170.40.247
xx.170.40.32
xx.170.43.200
xx.170.44.103
xx.170.44.105
xx.170.44.108
xx.170.44.111
xx.170.44.127
xx.170.44.137
xx.170.44.146
xx.170.44.157
xx.170.44.77
xx.170.44.81
xx.170.44.86
xx.170.44.95
xx.170.44.96 [question edited by staff to remove full IP addresses]0