What makes a system insecure?

Switching it on. The adage usually quoted runs along these lines:

"The only system which is truly secure is one which is switched off and unplugged, locked in a titanium-lined safe, buried in a concrete bunker, and surrounded by nerve gas and very highly paid armed guards. Even then, I wouldn't stake my life on it."

(The original version of this quotation is attributed to Gene Spafford.)

A system is only as secure as the people who can get at it. It can be "totally" secure without any protection at all, so long as its continued good operation is important to everyone who can get at it, all of those people are responsible, and regular backups are made in case of hardware failure. Many laboratory PCs quite merrily tick away the hours like this.

The problems arise when a need (such as confidentiality) has to be fulfilled. Once you start putting locks on a system, it is fairly likely that you will never stop.

Security holes manifest themselves in (broadly) four ways:

  1. Physical Security Holes.

    Where the potential problem is caused by giving unauthorised persons physical access to the machine, which might allow them to do things they shouldn't be able to do.

  2. Software Security Holes. New holes like this appear all the time, and your best hopes are to:

    1. try to structure your system so that as little software as possible runs with root/daemon/bin privileges, and that which does is known to be robust.
    2. subscribe to a mailing list which can get details of problems and/or fixes out to you as quickly as possible, and then ACT when you receive information.
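The first of those points can be audited mechanically: walk the filesystem and list everything that runs set-uid as root, then decide which of those files genuinely need the privilege. A minimal sketch (the helper names are illustrative, not part of any standard tool):

```python
import os
import stat

def is_setuid_root(path):
    """True if `path` has the set-uid bit set and is owned by uid 0 (root)."""
    st = os.stat(path)
    return bool(st.st_mode & stat.S_ISUID) and st.st_uid == 0

def find_setuid_root(top):
    """Return a sorted list of set-uid-root files under directory `top`."""
    hits = []
    for dirpath, _subdirs, files in os.walk(top):
        for name in files:
            full = os.path.join(dirpath, name)
            try:
                if is_setuid_root(full):
                    hits.append(full)
            except OSError:
                pass  # skip unreadable or vanished entries
    return sorted(hits)
```

Running something like `find_setuid_root("/usr")` periodically and comparing the result against a known-good list is one simple way to notice software that has quietly acquired privileges it shouldn't have.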

  3. Incompatible Usage Security Holes

    Where, through lack of experience or through no fault of his/her own, the System Manager assembles a combination of hardware and software which, when used as a system, is seriously flawed from a security point of view. It is the incompatibility of trying to do two unconnected but useful things which creates the security hole.

    Problems like this are a pain to find once a system is set up and running, so it is better to build your system with them in mind. It's never too late to have a rethink, though.

    Some examples are detailed below; let's not go into them here, as it would only spoil the surprise.

  4. Choosing a suitable security philosophy and maintaining it.

    The fourth kind of security problem is one of perception and understanding. Perfect software, protected hardware, and compatible components don't work unless you have selected an appropriate security policy and turned on the parts of your system that enforce it. Having the best password mechanism in the world is worthless if your users think that their login name backwards is a good password! Security is relative to a policy (or set of policies) and the operation of a system in conformance with that policy.
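The login-name-backwards example above is easy to turn into a mechanical check at password-change time. A minimal sketch (the function name and the length threshold are illustrative assumptions, not a complete password policy):

```python
def is_obviously_weak(password, login):
    """Reject passwords trivially derived from the login name:
    the login itself, the login spelled backwards, or anything very short."""
    p = password.lower()
    l = login.lower()
    return p == l or p == l[::-1] or len(password) < 8
```

A real policy would also screen dictionary words, simple character substitutions, and password reuse; this sketch catches only the exact failure mode described above, but even that much is more than some sites enforce.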