Block all but one URL in a directory using robots.txt?
-
Is it possible to block all but one URL with robots.txt?
For example, take domain.com/subfolder/example.html: if we block the /subfolder/ directory, we want every URL in it to be blocked except the exact-match URL domain.com/subfolder.
-
A note on how conflicts are resolved: when a Disallow and an Allow both match the same URL, Googlebot follows the most specific (longest) matching rule rather than simply the last one listed, and an Allow wins ties. Some older or simpler parsers instead honor the first or last matching directive, so it's safest to write rules that give the same answer under any of those schemes.
So the simple way to do this is to disallow all files first, then allow the path you want with a more specific rule. It would look something like this:
User-agent: *
Disallow: /
Allow: /test

Caveat: this is NOT the way robots.txt is supposed to work. By design, robots.txt is meant for disallowing, and technically you shouldn't ever have to use it for allowing. That said, Allow is widely supported and this should work pretty well.
You can check your work with the robots.txt checker in Google Webmaster Tools, under Site Configuration > Crawler Access. Just paste in your proposed robots.txt, then a test URL, and you should be good to go.
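If you want a quick local sanity check too, Python's built-in urllib.robotparser can evaluate a proposed rules file against test URLs. A minimal sketch, assuming the question's placeholder domain and paths — note that Python's parser resolves conflicts by first match rather than longest match, so listing the Allow line first keeps both it and Googlebot in agreement:

```python
from urllib import robotparser

# Proposed rules: Allow listed first so that first-match parsers
# (like Python's) and longest-match parsers (like Googlebot)
# both reach the same verdict for each URL.
rules = """\
User-agent: *
Allow: /subfolder/example.html
Disallow: /subfolder/
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("*", "https://domain.com/subfolder/example.html"))  # True
print(rp.can_fetch("*", "https://domain.com/subfolder/other.html"))    # False
print(rp.can_fetch("*", "https://domain.com/page.html"))               # True
```

This is only a rough proxy for how real crawlers behave, but it catches obvious ordering mistakes before you publish the file.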
Hope this helps!
-
As far as I know, that possibility doesn't exist directly. One fast way around it is to run a crawler over your URLs, copy every URL in the folder into robots.txt as its own Disallow line, and leave out the one you want in the index.
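That manual approach can at least be scripted. A minimal sketch, assuming a made-up list of crawled paths (in practice this would come from your crawler's export) and a hypothetical page to keep indexable:

```python
# Hypothetical paths found by crawling /subfolder/ -- placeholders,
# not real URLs; substitute your crawler's output here.
crawled_paths = [
    "/subfolder/example.html",
    "/subfolder/old-page.html",
    "/subfolder/archive.html",
]
keep = "/subfolder/example.html"  # the one URL to leave indexable

# Emit one Disallow line per crawled path, skipping the kept page.
lines = ["User-agent: *"]
lines += [f"Disallow: {path}" for path in crawled_paths if path != keep]
print("\n".join(lines))
```

For a folder with 100+ pages this beats hand-editing, though the Disallow/Allow approach in the first answer produces a much shorter file.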
Related Questions
-
Trailing Slashes on URLs
Hi, we currently have a site on WordPress which has two versions of each URL: one with a trailing slash and one without. Example: www.domain.com/page (preferred version, based on link data) and www.domain.com/page/. The non-slash version of the URL has most of the external links pointing to it, so we are going to pick that as the preferred version. However, currently each version of every URL has a rel canonical tag pointing to the non-preferred version; e.g. on www.domain.com/page the rel canonical tag is: www.domain.com/page/. What would be the best way to clean up this setup? Cheers.
Intermediate & Advanced SEO | cathywix0
-
Large robots.txt file
We're looking at potentially creating a robots.txt with 1,450 lines in it. This would remove 100k+ pages from the crawl, all of them old pages (I know the ideal would be to delete/noindex them, but that's not viable, unfortunately). The issue I'm wondering about is whether a robots.txt that large will either stop the file from being followed or slow our crawl rate. Does anybody have any experience with a robots.txt of that size?
Intermediate & Advanced SEO | ThomasHarvey0
-
Robots.txt - Do I block Bots from crawling the non-www version if I use www.site.com ?
My site is set up at http://www.site.com, and I redirect the non-www version to the www version in the .htaccess file. My question is: what should my robots.txt file look like for the non-www site? Do you block robots from crawling the site like this, or do you leave it blank?

User-agent: *
Disallow: /

Sitemap: http://www.morganlindsayphotography.com/sitemap.xml
Sitemap: http://www.morganlindsayphotography.com/video-sitemap.xml
Intermediate & Advanced SEO | morg454540
-
Hreflang targeted website using the root directory's description & title
Hi there, I recently applied the hreflang tags like so: Unfortunately, the Australian site uses the same description and title as the US site (which was initially the root directory). Am I doing something wrong? Would appreciate any response, thanks!
Intermediate & Advanced SEO | oliverkuchies0
-
Why is this url redirecting to our site?
I was doing an audit on our site, searching for duplicate content using different terms from each of our pages, and came across the following result: www.sswug.org/url/32639 redirects to our website. Is that normal? There are hundreds of these URLs in Google, all with the exact same description. I thought it was odd. Any ideas, and what is the consequence of this?
Intermediate & Advanced SEO | Sika220
-
Google tagged URL an overly-dynamic URL?
I'm reviewing my campaign and spotted the overly-dynamic URL box showing a few links. Reviewing them, they are my Google-tagged URLs (utm_source, utm_medium, utm_campaign, etc.). I've turned some internal links into Google-tagged URLs, but should these cause concern?
Intermediate & Advanced SEO | Bio-RadAbs0
-
When using ALT tags - are spaces, hyphens or underscores preferred by Google when using multiple words?
When plugging ALT tags into images, does Google prefer spaces, hyphens, or underscores for multi-word text? I know that with filenames, hyphens or underscores are preferred and spaces are replaced with %20. Thoughts? Thanks!
Intermediate & Advanced SEO | BrooklynCruiser3
-
Are URL shorteners building domain authority everytime someone uses a link from their service?
My understanding of domain authority is that the more links pointing to any page / resource on a domain, the greater the overall domain authority (and weight passed from outbound links on the domain) is. Because URL shorteners create links on their own domain that redirect to an off-domain page but link "to" an on-domain URL, are they gaining domain authority each time someone publishes a shortened link from their service? Or does Google penalize these sites specifically, or links that redirect in general? Or am I missing something else?
Intermediate & Advanced SEO | Jay.Neely0