How to prevent a directory from being accessed by search engines?
-
Pretty much as the question says: is there any way to stop search engines from crawling a directory? I am working on a WordPress installation for my site but don't want it to be listed in search engines until it's ready to be shown to the world. I know the simplest way is to password-protect the directory, but I had some issues when I tried to implement that, so I'd like to see if there's a way to do it without passwords. Thanks in advance.
-
But don't forget to remove that Disallow from robots.txt when you go live, if you want those pages to be indexed (and also the meta robots noindex, nofollow tag).
Otherwise you might be pulling your hair out trying to figure out why none of your pages are getting indexed in the SERPs.
-
You're absolutely right! I left that part out. Thanks
-
The robots.txt file does not guarantee that your pages will not show up in search results! Your best bet after password protection is adding a noindex meta tag to your page headers.
Google has openly said that it obeys this tag (Matt Cutts).
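For reference, here is what that tag looks like. It goes inside the <head> of each page you want kept out of the index; this is a generic example, not something specific to the asker's site:

```html
<!-- Tells compliant crawlers not to index this page or follow its links -->
<meta name="robots" content="noindex, nofollow">
```

Remove it (or switch it to "index, follow") when the site goes live, for the same reason as the Disallow line above.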
-
Xee,
It always helps, and it is very easy to implement. The directive that shows the path to the sitemap is very good.
-
It's not required to have the ending slash. At least, it works for us without it.
-
As it is, my site is just phpBB3 forums (www.bearsfansonline.com); would a sitemap really help that much?
-
If you don't have a robots.txt file, you need to include some important stuff first.
First, do you have a sitemap.xml for your website? If not, it's very important and you can create one at: http://www.xml-sitemaps.com/
Then create a robots.txt file and include the following:
User-agent: *
Allow: /
Disallow: /directoryname
Sitemap: http://www.yousite.com/sitemap.xml
With this you will inform all robots where your sitemap is. You should read more about robots.txt in this great post: http://www.seomoz.org/blog/robot-access-indexation-restriction-techniques-avoiding-conflicts
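If you want to double-check that a Sitemap line is written correctly, Python's standard-library robots.txt parser will pick it up. A minimal sketch, using the placeholder domain from the answer above:

```python
from urllib import robotparser

# Hypothetical robots.txt content mirroring the answer above.
rules = """User-agent: *
Disallow: /directoryname
Sitemap: http://www.yousite.com/sitemap.xml
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# site_maps() (Python 3.8+) returns any Sitemap URLs the parser found.
print(rp.site_maps())  # ['http://www.yousite.com/sitemap.xml']
```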
-
Shouldn't you put a slash at the end of the directory in the robots file?
You can create the robots file through Google Webmaster Tools.
-
I don't have a robots.txt file in my root. Do I just create a text file, put the above lines into it, and upload it to my root after changing the name?
-
I'm assuming you want all search engines blocked from this directory. If so, edit your robots.txt file to state the following. This will block all bots from accessing that folder/directory on your site:
User-agent: *
Disallow: /directoryname
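You can sanity-check those two lines with Python's standard-library robots.txt parser before uploading; the domain below is a placeholder, not the asker's site:

```python
from urllib import robotparser

# The same rules as in the answer above.
rules = """User-agent: *
Disallow: /directoryname
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# URLs under the blocked directory are disallowed; everything else is allowed.
print(rp.can_fetch("*", "http://example.com/directoryname/page.html"))  # False
print(rp.can_fetch("*", "http://example.com/index.html"))               # True
```

Note this only tells you how a well-behaved bot will interpret the file; as mentioned elsewhere in this thread, robots.txt alone does not guarantee the pages stay out of search results.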