What are your thoughts on security of placing CMS-related folders in a robots.txt file?
-
So I was just about to add a whole heap of CMS-related folders to my robots.txt file to exclude them from search, and thought "hey, I'm publicly telling people where my admin folders are"...surely that's not right?!
Should I leave them out of the robots.txt file and hope for the best that they never get indexed? Should I use a noindex meta tag on every page?
What are people's thoughts?
Thanks,
James
PS. I know this is similar to lots of other discussions around meta noindex vs. robots.txt, but I'm after specific thoughts around the security aspect of listing your admin folders in a robots.txt file...
-
Surely your admin folders are secured? If so, it wouldn't matter if someone knows where they are.
-
As a rule, you want to avoid using robots.txt whenever possible. It does not consistently protect you from crawlers, and when it does block crawlers, it kills any PageRank on those pages.
If you can block those pages with a noindex tag instead, that is the preferable solution.
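For reference, the two mechanisms look like this (a minimal sketch; the Disallow path is hypothetical). robots.txt is only a request to well-behaved crawlers, and it is publicly readable, while the noindex tag allows the page to be crawled but keeps it out of the index:

```
# robots.txt — publicly readable, and only a request:
User-agent: *
Disallow: /my-admin/

<!-- noindex — goes in each page's <head>; the page may still be
     crawled, but it is dropped from the index: -->
<meta name="robots" content="noindex">
```

For non-HTML files that can't carry a meta tag, the same directive can be sent as an `X-Robots-Tag: noindex` HTTP response header.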
With respect to security for a CMS site, it really needs to be a comprehensive effort. Many site owners take a couple of steps and then have a false sense of security. Here are a few thoughts:
-
try the site address with /administrator after it to access Joomla and other sites
-
try the site address or blog with /wp-admin/ after it to access WordPress sites
-
make up a webpage and try accessing it to view the site's 404 page
-
right-click on a page and choose View Page Source. Often you will see the name of the CMS clearly listed. Other times you will see clear clues, such as /wp/ in folder names, or unique extensions such as Yoast SEO, which will give you an idea of the CMS
Once a bad guy knows which CMS is in use, they know the default folder structure and more. The point is it requires a lot more effort than most people realize to hide the CMS in use. I applaud your effort, but be very thorough about it. There is a lot more involved than simply covering your robots.txt file.
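The "View Page Source" check above can be sketched as a small fingerprinting function. This is a hypothetical illustration of the kinds of clues an attacker looks for; the markers below are common defaults, not an exhaustive or authoritative list:

```python
from typing import Optional


def detect_cms(html: str) -> Optional[str]:
    """Best-guess CMS name based on well-known page-source clues."""
    page = html.lower()
    # WordPress leaves /wp-content/ and /wp-includes/ paths in source,
    # and plugins such as Yoast SEO add identifying markup.
    if "/wp-content/" in page or "/wp-includes/" in page or "yoast" in page:
        return "WordPress"
    # Joomla commonly exposes a generator meta tag and /media/jui/ paths.
    if 'name="generator" content="joomla' in page or "/media/jui/" in page:
        return "Joomla"
    return None
```

For example, `detect_cms('<link href="/wp-content/themes/x/style.css">')` returns `"WordPress"` — which is why hiding the CMS takes more than editing robots.txt.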
-
-
I found three options for you: http://www.techiecorner.com/106/how-to-disable-directory-browsing-using-htaccess-apache-web-server/
I think if you do it with .htaccess, which is a folder-specific file, then nobody will be able to easily detect where the admin content is located.
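The directory-browsing fix from that link boils down to a one-line .htaccess directive. The Deny block below is a hypothetical sketch for locking an admin folder down to a trusted IP (Apache 2.2 syntax; the IP and placement are assumptions, so adjust them to your setup):

```apache
# Disable directory listings for this folder and its children
Options -Indexes

# Hypothetical: restrict an admin area to one trusted IP
# (place this .htaccess inside the admin folder itself)
Order Deny,Allow
Deny from all
Allow from 203.0.113.10
```

Note that this hides directory listings and blocks access, but the folder's existence can still be inferred from response codes, so it complements rather than replaces proper authentication.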