What are your thoughts on security of placing CMS-related folders in a robots.txt file?
-
So I was just about to add a whole heap of CMS-related folders to my robots.txt file to exclude them from search, and thought "hey, I'm publicly telling people where my admin folders are"...surely that's not right?!
Should I leave them out of the robots.txt file, and hope for the best that they never get indexed? Should I use noindex meta data on every page?
What are people's thoughts?
Thanks,
James
PS. I know this is similar to lots of other discussions around meta noindex vs. robots.txt, but I'm after specific thoughts around the security aspect of listing your admin folders in a robots.txt file...
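To make the concern concrete, here is a sketch of the kind of robots.txt in question (the folder names are made-up examples, not from any real site). The file is publicly readable at /robots.txt, so every Disallow line doubles as a signpost:

```
User-agent: *
Disallow: /admin/            # blocked from crawling, but now advertised
Disallow: /cms-login/        # hypothetical folder names for illustration
Disallow: /private-uploads/
```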
-
Surely your admin folders are secured? If they are, it doesn't matter whether someone knows where they are.
-
As a rule, you want to avoid using robots.txt files whenever possible. It does not consistently protect you from crawlers, and when it does block crawlers it kills any PageRank on those pages.
If you can block those pages with a noindex tag, it would be a preferable solution.
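For example, a generic robots meta tag (a sketch, not specific to any CMS) goes in the head of each page you want kept out of the index. Note that a crawler must be able to fetch the page to see the tag, so the page must not also be blocked in robots.txt:

```html
<!-- In the <head> of each page you want kept out of the index -->
<meta name="robots" content="noindex, nofollow">
```

For non-HTML resources, the same directive can be sent as an `X-Robots-Tag: noindex` response header.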
With respect to security for a CMS site, it really needs to be a comprehensive effort. Many site owners take a couple of steps and then have a false sense of security. Here are a few thoughts:
-
try the site address with /administrator after it — the default admin path for Joomla and some other CMSs
-
try the site address or blog with /wp-admin/ after it to access WordPress sites
-
make up a webpage and try accessing it to view the site's 404 page
-
right-click on a page and choose View Page Source. Often you will see the name of the CMS clearly listed. Other times you will see clear clues, such as /wp-content/ in folder names, or signature plugins such as Yoast SEO that give away the CMS
Once a bad guy knows which CMS is in use, they know the default folder structure and more. The point is that it takes a lot more effort than most people realize to hide the CMS in use. I applaud your effort, but be very thorough about it. There is a lot more involved than simply covering your robots.txt file.
-
-
I found three options for you: http://www.techiecorner.com/106/how-to-disable-directory-browsing-using-htaccess-apache-web-server/
I think if you do it with .htaccess, which is a folder-specific file, then nobody will be able to detect where the admin content is located.
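For what it's worth, the directory-browsing fix from that link comes down to a one-line Apache directive, and you can go a step further and password-protect the admin folder itself. A rough sketch of an admin-folder .htaccess (paths are placeholders, adjust for your server):

```
# .htaccess inside the admin folder (Apache; adjust paths for your server)
Options -Indexes                      # no directory listing
AuthType Basic                        # require a username/password
AuthName "Admin area"
AuthUserFile /full/path/to/.htpasswd  # placeholder path to your htpasswd file
Require valid-user
```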