Wednesday, November 28, 2012

Security Testing: Why it’s important that things don’t work when they shouldn’t


Let’s say you’re developing an application and you reach the conclusion that you need to secure communications between your client and your server. You decide to use SSL/TLS with a strong cipher suite, and you implement it with a standard library.

To test your implementation you set up a server with an SSL certificate and see that the client application communicates correctly with the server. Great – so you've tested your SSL implementation, right?

Wrong! It’s possible that your client application isn't actually using SSL at all and is communicating with the server without encryption. So to make sure that the communications are actually encrypted, you set up a packet analyzer (e.g. Wireshark) and see that indeed everything is encrypted. Now you’re finished, right?

Wrong! Your client application might use SSL when it talks to a server that supports SSL, but be willing to fall back to unencrypted communication when the server doesn’t. Since the whole point of SSL is to secure communications against attackers, this fallback makes SSL worthless - a hacker can mount a Man-in-the-Middle attack by setting up a fake server that talks to the client in the clear and to the real server over SSL. So to test this, you remove the certificate from your server and check that the client refuses to communicate with it. Great - now we’re done?

Not even close! The anchor of SSL security is the validation of the server’s certificate. Perhaps the client application is willing to accept a badly signed or expired certificate? So you set up a server with a badly signed certificate, and another with an expired certificate, and check that the client doesn't communicate with either server. Finished?

Not yet! Maybe your application is willing to accept someone else’s correctly signed certificate? Or maybe the application will accept a certificate issued by a non-trusted certificate authority?
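A minimal sketch of what such negative tests could look like in Python, assuming the client’s connection logic lives behind a hypothetical connect_securely() helper and that you have stood up test servers with deliberately broken certificates (the hostnames below are placeholders, not real endpoints):

```python
import socket
import ssl

def connect_securely(host, port=443):
    """Stand-in for the client code under test: must reject invalid certificates."""
    context = ssl.create_default_context()  # certificate + hostname checks enabled
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

def assert_rejected(host):
    """The negative test: the handshake must FAIL for this host."""
    try:
        connect_securely(host)
    except (ssl.SSLError, ssl.CertificateError):
        return  # good - the client refused to talk
    raise AssertionError("client accepted a bad certificate from " + host)

assert_rejected("expired.test.example")       # expired certificate
assert_rejected("badly-signed.test.example")  # bad signature / untrusted CA
assert_rejected("wrong-host.test.example")    # someone else's valid certificate
```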

To test the implementation of a security feature, such as SSL, it is not enough to test that it works when it should - it’s critical to test that it doesn’t work when it shouldn’t. Otherwise you haven’t actually tested the feature, because part of the feature’s functionality is that it should not work in an insecure situation. Defining such “negative” tests - checking that the system stops working when it isn’t secure - is much more difficult than defining ordinary functional tests.

This may sound obvious but in reality most implementers of security features simply do not perform such security testing, or at least not enough of it. Researchers at the University of Texas recently published a study (PDF) analyzing dozens of SSL implementations and found that many fail to properly validate the SSL certificates and are thus entirely insecure. This included major applications such as Amazon, PayPal, and Chase Bank, and platforms such as Apache Axis. Another recent study (PDF) found that 8% of sampled Android apps that use SSL fail to properly validate certificates.

These failures would not have occurred if proper security testing had been done. Such security testing isn't pentesting - it’s part of the basic functional testing of security features.

Sunday, November 25, 2012

How the PS3 LV0 key was (probably) hacked

If you follow Schneier you may have followed his link to a Hexus.net article entitled "Sony lets slip its PlayStation 3 Master Key", which tells us:
This wouldn't be the first time Sony has leaked important security keys, common to every PlayStation 3 console, however, this is the first time the console's LV0 decryption keys have been let loose in the wild.
So what makes the LV0 keys so special? These are the core security keys of the console, used to decrypt new firmware updates. With these keys in-hand, makers of custom firmwares (CFW) can easily decrypt any future firmware updates released by Sony, remaining a step ahead of the update game; likewise, modifying firmwares and preventing them from talking back to Sony HQ also becomes a much easier task.
Some background: the PS3 uses a "chain of trust" to ensure that only trusted code runs on the console. The chain of trust starts from ROM code, which is immutable. This ROM code validates the authenticity of a bootloader, which in turn validates the authenticity of the LV0 code, which in turn validates the next link in the chain.
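To make the structure concrete, here is a toy sketch of such a chain in Python - the stage names and the dictionary-based "signature check" are purely illustrative, not the PS3's actual format:

```python
# Each boot stage verifies the next stage before executing it; trust is
# anchored in the immutable ROM key.
STAGES = ["bootloader", "lv0", "lv1"]

def verify(key, image):
    # Placeholder for a real cryptographic signature check.
    return image.get("signed_by") == key

def boot(rom_key, images):
    key = rom_key                        # trust starts at the ROM
    for name in STAGES:
        image = images[name]
        if not verify(key, image):
            raise RuntimeError("refusing to boot untrusted " + name)
        key = image["next_key"]          # each stage vouches for the next

images = {
    "bootloader": {"signed_by": "rom-key", "next_key": "bl-key"},
    "lv0":        {"signed_by": "bl-key",  "next_key": "lv0-key"},
    "lv1":        {"signed_by": "lv0-key", "next_key": "lv1-key"},
}
boot("rom-key", images)  # boots; tamper with any stage and it refuses
```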

A couple of years ago, due to an epic fail by Sony, hackers figured out the root secret key used by Sony to sign code and thus completely shattered the chain of trust.

But Sony still had a trick up its sleeve. The LV0 code isn't only signed - it's also encrypted, with an on-chip key known as the "LV0 decryption key". This allows Sony to keep pushing new software to devices, software that attackers would need to break without ever being able to read it (since it's encrypted). Now a group of hackers has published these LV0 decryption keys.

So how did the hackers get their hands on the LV0 decryption keys? According to the Hexus.net article:
So where has Sony gone wrong and what can the firm do to resolve the issue? Perhaps the most obvious mistake was to allow keys to leak in the first place, which were extracted through a flaw in the console's hypervisor.
"A flaw in the console's hypervisor"? What flaw? And how can such a flaw leak the LV0 keys when they should be long gone by the time the attackers can load their own code?

I suspect that the Hexus.net article is confusing the new LV0 key hack with the original GeoHot attack, which indeed circumvented the PS3 hypervisor to gain access to LV0 and LV1 (not the LV0 decryption keys - the decrypted LV0 code itself). This is described in more detail here.

So how did the hackers obtain the LV0 decryption key? Ace hacker Wololo described a possible method on his blog:
For the exploit that we knew about, it would’ve required hardware assistance to repeatedly reboot the PS3 and some kind of flash emulator to set up the exploit with varying parameters each boot, and it probably would’ve taken several hours or days of automated attempts to hit the right combination (basically the exploit would work by executing random garbage as code, and hoping that it jumps to somewhere within a segment that we control – the probabilities are high enough that it would work out within a reasonable timeframe).
This makes a lot of sense. Since the LV0 signing keys were hacked long ago, hackers can sign any file as the LV0 code and the device will run it. But since they didn't know the LV0 decryption key, the device decrypts that file with the LV0 key before running it, turning it into a random blob of instructions. Effectively, this lets attackers run random code on the device.

If you try enough random blobs, sooner or later you'll hit one that does something useful for the attacker. In this case, the jackpot is a blob whose random code jumps to unencrypted code the attacker has placed elsewhere in memory - code that then runs with full access to device memory, including the LV0 decryption keys themselves.
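As a back-of-the-envelope illustration - with entirely made-up numbers, nothing PS3-specific - here is what the odds of such a brute-force approach might look like:

```python
# Assume some fraction of all random instruction words happens to branch
# into memory the attacker controls; repeated automated boots eventually
# hit one of them.
p_useful_word = 2.0 ** -16        # assumed odds for a single random word
slots = 1024                      # instruction slots in one garbage payload
p_per_boot = 1 - (1 - p_useful_word) ** slots
print("chance per boot: %.4f" % p_per_boot)        # ~0.0155
print("expected boots:  %.0f" % (1 / p_per_boot))  # ~65 automated reboots
```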

This is exactly why it isn't enough to encrypt a file - you also need to sign it. Otherwise attackers can feed you random garbage which they can't control, but which may still serve their needs.
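Here's a minimal sketch of the verify-before-use pattern, using the third-party `cryptography` package. Everything in it is illustrative - the toy XOR stand-in for on-chip decryption, the made-up payloads - and not Sony's actual design; the point is only the order of operations:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def toy_decrypt(blob, key=0x5A):
    # Toy stand-in for the real on-chip decryption (XOR is its own inverse).
    return bytes(b ^ key for b in blob)

def load_stage(ciphertext, signature, pubkey):
    # 1. Authenticate first: unauthenticated ciphertext is attacker input.
    pubkey.verify(signature, ciphertext, padding.PKCS1v15(), hashes.SHA256())
    # 2. Only after the signature checks out, decrypt and hand off.
    return toy_decrypt(ciphertext)

priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ciphertext = toy_decrypt(b"next-stage code")  # "encrypt" the payload
sig = priv.sign(ciphertext, padding.PKCS1v15(), hashes.SHA256())

print(load_stage(ciphertext, sig, priv.public_key()))  # b'next-stage code'
try:
    load_stage(b"random attacker garbage", sig, priv.public_key())
except InvalidSignature:
    print("refused: garbage is rejected before it is ever decrypted")
```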

This is just another example of the old rule: When you have to sign - sign. Don't encrypt.

Of course in this case Sony did sign - they just signed badly and thus lost the signing keys.

Tuesday, November 20, 2012

Some thoughts on two-factor authentication

As defined by Wikipedia:
Two-factor authentication is an approach to authentication which requires the presentation of two or more of the three authentication factors: a knowledge factor ("something the user knows"), a possession factor ("something the user has"), and an inherence factor ("something the user is").
An example of two-factor authentication is the option in Google to require a code delivered via SMS to the user's phone, in addition to the account password, in order to log in to the user's Google account. In this case the password is "something the user knows" and the phone is "something the user has".

The security industry considers two-factor authentication to be much more secure than single-factor authentication. But when you think about it, you've got to ask: why? If the effort required by a hacker to break one authentication method is X and the effort required to break the second is Y, then the effort required to break both is X+Y, which is at most double the effort required to break the stronger of the two methods. In the world of security, a system is considered significantly more secure only if breaking it requires an order of magnitude more effort - double isn't really significant. So why is two-factor authentication considered much stronger than single-factor authentication?

The answer lies in the fact that most hacking is opportunistic and not specific. In most cases the hackers don't develop an attack vector to break a specific system - they utilize existing, previously developed, attack vectors and adapt them to the specific system.

Therefore, when asking how secure a system is, the question isn't how much effort it would take to develop attack vectors against it, but what the chances are that such vectors have already been developed. In the case of two-factor authentication, if the chance that an attack vector exists against the first factor is 1 in X and against the second is 1 in Y, then (assuming the chances are independent) the chance that vectors exist to break both is 1 in X*Y. The security of a system protected by two-factor authentication is thus the product of the security provided by each of the two factors, not the sum.
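A quick toy calculation - the numbers are invented purely for illustration - makes the contrast between the two views concrete:

```python
# "Effort" view: breaking both factors costs the sum of the efforts.
effort_x, effort_y = 100.0, 80.0   # hypothetical units of attacker effort
print(effort_x + effort_y)         # 180 - at most ~2x the stronger factor

# "Existing vector" view: the probabilities multiply.
p_x, p_y = 1 / 100.0, 1 / 1000.0   # chance a ready-made vector exists
print(p_x * p_y)                   # 1e-05 - orders of magnitude better
```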

But this is not always the case.

If there is a relation between the chance of an attack vector breaking one authentication factor and the chance of a vector breaking the other, then two-factor authentication no longer multiplies the level of security. Sometimes a single attack vector covers both authentication factors, in which case, against that specific vector, having two factors doesn't really add anything. For example, if the two factors are a password and the device you're using, then someone running malware on your device can overcome both.
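Extending the toy model from above: if a single shared vector - say, malware on the user's device, available with some probability - defeats both factors at once, the combined break probability is dominated by that shared term (numbers again invented for illustration):

```python
p_x, p_y = 1 / 100.0, 1 / 1000.0   # per-factor vector probabilities
p_shared = 1 / 200.0               # a single vector that beats both factors
p_break = p_shared + (1 - p_shared) * p_x * p_y
print(p_break)  # ~0.005 - barely better than the shared vector alone
```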

Because of this, for two-factor authentication to be effective, the two factors must be as technologically distinct as possible. The hacking techniques used to overcome each factor should not coincide, and should come from distinct realms of expertise.

Likewise, two-factor authentication isn't really strong against well-funded Advanced Persistent Threat (APT) attacks. Such attacks aren't opportunistic - they are targeted at a specific system and will do whatever it takes to compromise it. Against such attacks, two-factor authentication does increase the effort required to break the system, but at most by a factor of two.