Tuesday, August 16, 2011

The seven laws of security engineering

There are a few laws in the field of security engineering that impact many aspects of the discipline. Most of these laws are self-evident and well known, but applying them to real-world situations is difficult. In fact, most security failures in the field can be traced back to one or more of these laws.
Following is a list of seven such laws with a short description of each law. Future posts will elaborate on these laws (and others) as part of an analysis of specific cases.
Law #1: Every security system is based on assumptions
You might ask a security engineer whether a certain system is secure. If they give you an answer that sounds evasive and noncommittal, that’s good – otherwise they’re not telling you the whole truth.
Because the truth is that no system is 100% secure in and of itself. The most a security engineer can say is that under certain assumptions the system is secure.

Every security system is based on assumptions regarding what an attacker can and cannot do. Common assumptions include that the attacker can’t gain access to certain data in the system (e.g. secret keys in a server) and that the attacker can’t modify the flow of code running on certain system entities. Likewise, security systems make certain cryptographic assumptions which can’t be proven (e.g. that it is not feasible for an attacker to calculate a preimage of a hash).
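To make one such assumption concrete, here is a minimal sketch (mine, not taken from any particular system) of an integrity check that silently relies on the second-preimage resistance of SHA-256. Nothing in the code enforces that assumption; the design simply depends on it.

    import hashlib

    def fingerprint(data: bytes) -> str:
        """Return the hex SHA-256 digest of the data."""
        return hashlib.sha256(data).hexdigest()

    # The publisher distributes this digest alongside the genuine file.
    published_digest = fingerprint(b"official firmware image v1.0")

    def verify(candidate: bytes, expected: str) -> bool:
        # This check is only as strong as the cryptographic assumption behind
        # it: an attacker must not be able to craft a *different* input that
        # hashes to the same digest. The code can't prove that assumption;
        # the security design has to state it and rely on it.
        return fingerprint(candidate) == expected

    print(verify(b"official firmware image v1.0", published_digest))  # True
    print(verify(b"tampered firmware image", published_digest))       # False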

A good security engineer clarifies these assumptions in the security design of the system, takes steps to ensure that they actually hold for the system and builds in recovery measures in case the assumptions don't hold. Most failures in security systems can be traced to a failure to recognize the assumptions at the root of the system or to take steps to strengthen those assumptions.

Two corollaries of this law are “Security is in the details” (Google count: 319,000) and “A system is only as secure as its implementation” (Google count: 1,510). These follow from the law because one assumption every security system makes is that the details of the implementation are correct and robust.
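A small illustration of “security is in the details” (again a sketch of mine, using a hypothetical API token check): a naive equality test returns as soon as the first byte differs, so response timing can leak how much of the secret a guess got right, quietly breaking the implicit assumption that a failed guess teaches the attacker nothing. A constant-time comparison closes that channel.

    import hmac

    SECRET_TOKEN = b"s3cr3t-api-token"  # hypothetical server-side secret

    def check_token_naive(supplied: bytes) -> bool:
        # '==' on bytes can short-circuit at the first mismatching byte, so
        # response timing may reveal how long a correct prefix was supplied.
        return supplied == SECRET_TOKEN

    def check_token_safe(supplied: bytes) -> bool:
        # hmac.compare_digest runs in time independent of where the mismatch
        # occurs, removing the timing side channel.
        return hmac.compare_digest(supplied, SECRET_TOKEN)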
 
Law #2: Attackers need motivation
In law #1 I wrote that “every security system is based on assumptions regarding what an attacker can and cannot do”. This is not entirely accurate: we don’t really need to assume that the attacker cannot do something – it’s enough to assume that the attacker will not do it.

The difference between these statements is the attacker’s motivation – a central issue that needs to be taken into account when designing a security system. When considering what a potential attacker might do, we need to take into account who may be motivated to attack the system, what capabilities such an attacker has, and how motivated they are to carry out the attack.

The attacker’s motivation is not necessarily a given; there are actions the defender can take that will increase or decrease it. For example, if the defender issues a challenge to break into his system, that’s a great incentive for hackers to do so. On the other hand, if the defender builds the system in a way that limits the benefit an attacker could gain from an attack, that can act as a deterrent.

In a perfect world (at least from the viewpoint of a security engineer) we might not want to make assumptions about the hacker’s motivation and would always build systems that can’t be hacked by anyone. But in the real world, increasing security entails costs that need to be justified. Which brings us to:
Law #3: Security comes at a cost
Security measures invariably have a cost. Sometimes it’s a simple price tag, such as $X for a firewall. Often it’s an operational cost. For example, to secure a system it may be necessary to limit access to certain resources. Doing so requires someone to manage the access rules for those resources and may make it more difficult for people without such access to do their jobs.
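As a rough sketch of where the operational cost hides (my illustration, with hypothetical roles and resources): even a trivial access-control table provides security only as long as someone keeps it current as people join, leave and change roles, and every rule that is too strict generates extra work for whoever needs an exception.

    # Minimal allowlist: resource -> roles permitted to access it.
    # The security comes from the table; the operational cost comes from
    # maintaining it every time the organisation changes.
    ACCESS_RULES = {
        "payroll-db": {"finance", "hr"},
        "build-server": {"engineering"},
    }

    def is_allowed(role: str, resource: str) -> bool:
        """Return True if the given role may access the given resource."""
        return role in ACCESS_RULES.get(resource, set())

    print(is_allowed("engineering", "build-server"))  # True
    print(is_allowed("engineering", "payroll-db"))    # False - and if that
    # engineer has a legitimate need, someone must review and update the rules.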

Operational costs are no less real than dollar costs. In fact, in many systems it is more difficult to get decision makers to pay such operational costs than to spend a few more bucks. How much someone is willing to invest in securing a system depends mainly on how motivated they are to do so, which brings us to law #4.
Law #4: Defenders need motivation too
How motivated is the defender? In an ideal, rational world this would depend mainly on the amount of damage the defender would incur from an attack. In the real world it is more a question of perceived damage, which may be more strongly related to external factors like media hype or the defender’s past experience than to the actual damage an attack would cause.

Another issue is who the defender actually is. In most situations there are multiple parties responsible for securing the system. For example, these could include the company that owns the system, the responsible manager at that company, the customers of that company and a security vendor that sells security products to that company. Each of these parties has different motivations in securing the system, and there are bound to be conflicts of interest between them. What constitutes a threat in the eyes of one may not be considered a threat by another.
Law #5: Security needs to be “good enough”
Google gives 282,000 results for the phrase “good enough security” – and this isn’t surprising. Though the term is usually used derogatorily, it’s actually very sensible. Since (law #1) there is no such thing as perfect security, and since (law #3) security comes at a cost, security engineers must find the golden mean of “good enough security”: a security solution that, considering the motivation and capabilities of the attacker (law #2), is likely to prevent the kinds of attacks that the defender is motivated enough (law #4) to pay the cost of defending against.

As in any balancing act, this is not an easy thing to achieve. Striking this balance is what makes every security engineer a gambler: reducing costs now means accepting a higher risk of a substantial security breach in the future – are you willing to take that bet?
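To put a number on the bet, here is a back-of-the-envelope expected-loss comparison (my own sketch, with made-up figures): the defender is effectively weighing the yearly cost of an extra security measure against the yearly risk it removes.

    def expected_annual_loss(breach_cost: float, annual_probability: float) -> float:
        """Annualised loss expectancy: what the accepted risk 'costs' per year."""
        return breach_cost * annual_probability

    # Hypothetical figures, for illustration only.
    breach_cost = 500_000.0    # estimated damage if the attack succeeds
    p_without = 0.02           # annual breach probability with "good enough" security
    p_with = 0.005             # annual breach probability with the extra measure
    control_cost = 8_000.0     # yearly cost of the extra measure

    risk_removed = (expected_annual_loss(breach_cost, p_without)
                    - expected_annual_loss(breach_cost, p_with))
    print(f"risk removed: ${risk_removed:,.0f}/year vs control cost: ${control_cost:,.0f}/year")
    # risk removed: $7,500/year vs control cost: $8,000/year - a defensible bet
    # today, but only while the probability estimates remain valid.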

When people use the term "good enough security" derogatorily, they don't really mean that the solution is good enough to secure the system; what they mean is that the system doesn't need to be properly secured. In other words, the defender's motivation isn't strong enough to pay the cost of properly securing the system. Often such "good enough security" is enough to protect the system today but will cease to be good enough in the future, because:
Law #6: Circumstances change
Attacker motivations and capabilities change with time and circumstances. A system that is secure enough to protect against a disgruntled customer may not be secure enough to protect against the Chinese government. A 56-bit DES key, which was once considered quite robust, can now be cracked quite easily.
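The DES point reduces to simple arithmetic (a sketch with an assumed search rate, not a measured benchmark): a 56-bit key space holds roughly 7.2 × 10^16 keys, which dedicated hardware or rented GPU time can exhaust in days.

    # Brute-force feasibility of a 56-bit DES key, using an assumed search rate.
    keyspace = 2 ** 56                    # ~7.2e16 possible keys
    keys_per_second = 100e9               # hypothetical rate for modern hardware
    seconds = keyspace / keys_per_second  # worst case: try every key
    days = seconds / 86_400

    print(f"{keyspace:.1e} keys exhausted in about {days:.1f} days at {keys_per_second:.0e} keys/s")
    # ~7.2e16 keys, about 8.3 days at 1e+11 keys/s (half that on average),
    # while a 128-bit key at the same rate would take billions of times the
    # age of the universe.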

So even if a security solution is “good enough” today under the given circumstances, it may not be good enough tomorrow under different ones. The problem is that in most cases the security engineer is no longer involved by then, and there’s no one available to periodically reevaluate whether the security solution is still appropriate given changes both in the system itself and in its environment. Many good security systems fail due to subsequent changes in usage or environment that were not considered when the system was originally designed.

Interestingly, this can also go the other way. In a classic post from 2009, Bruce Schneier noted a situation in which the emergence of a less capable attacker justified the deployment of additional security measures that weren’t part of the original security design:
During the Cold War, the NSA's primary adversary was Soviet intelligence, and it developed its crypto solutions accordingly. Even though that level of security makes no sense in Bosnia, and certainly not in Iraq and Afghanistan, it is what the NSA had to offer. If you encrypt, they said, you have to do it "right." The problem is, the world has changed. Today's insurgent adversaries don't have KGB-level intelligence gathering or cryptanalytic capabilities.  Defending against these sorts of adversaries doesn't require military-grade encryption only where it counts; it requires commercial-grade encryption everywhere possible.

Law #7: Security mechanisms need to be adopted by people
Most security mechanisms are dependent on human behavior; if people don't use the mechanism correctly it will not be effective. A password isn't effective if it's public knowledge. A key isn't effective if it's left in the door.
One of the more difficult tasks of security engineers is to build systems that people will use correctly. This can be done by minimizing human involvement and inconvenience, or by motivating users towards correct usage.
There is no lack of technically superb security systems that are worthless because they depend on people following rules without appropriate incentives. For example, see "Why (Special Agent) Johnny (Still) Can’t Encrypt".


Note: Feel free to use the comments to suggest additional laws of this type. The list is not intended to be closed and will be updated over time.
