Monday, September 16, 2013

Moved to Twitter and Pulse

After a hiatus I'm back to blogging on Product Security (and Innovation) on Pulse here.

For shorter takes on security news, follow me on Twitter at @dwfogel.

Thursday, December 20, 2012

Protecting the weak (passwords)

From @marshway:
Salting a fast hash function is like issuing life vests to the passengers of the Titanic.
This is an apt simile. I guess it requires some explanation for people less knowledgeable in the area of password security.

If a server uses passwords to authenticate users, the server needs to store, for each user, some value with which it can validate the password the user enters.

Servers can be hacked, in which case this database of password values could fall into the wrong hands. For this reason servers don't store the password itself - they store a one-way cryptographic hash of the password. A hacker who obtains a password hash from the database can't reverse it to find the actual password.
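A minimal sketch of the idea in Python (note: a bare SHA-256 hash like this is exactly the fast, unsalted scheme the rest of this post warns against - it's shown only to illustrate what "store a hash, not the password" means):

```python
import hashlib

def store_password(password: str) -> str:
    # Store only a one-way hash of the password, never the password itself.
    return hashlib.sha256(password.encode()).hexdigest()

def check_password(candidate: str, stored_hash: str) -> bool:
    # Verification re-hashes the candidate and compares the digests.
    return store_password(candidate) == stored_hash

stored = store_password("hunter2")
assert check_password("hunter2", stored)
assert not check_password("wrong-guess", stored)
```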

In theory that should be enough. The problem is that most actual passwords aren't random strings - they are chosen by humans, usually in a way that is easy to remember. Because of this crackers can (and have) produced relatively small dictionaries (with maybe a few million passwords) that contain the great majority of actual passwords used by people.

The crackers can also create a dictionary of these most common passwords hashed with popular hash algorithms. So if a cracker gets a hashed password from a server they can simply look it up in this dictionary and find the password itself, thus defeating the hash.

To prevent this, servers "salt" the hashed passwords. Instead of simply hashing each password with a single hash algorithm - for which the crackers could then create a dictionary of hashed passwords - the server uses a different hash function per password. This is done by hashing a few random bytes, called a "salt", together with the password. This makes it impossible to prepare a single dictionary of hashed passwords that could reveal many passwords in the server's database.
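A rough sketch of salting (the 16-byte salt length is an illustrative choice, and the fast SHA-256 hash is still deliberately naive - key stretching comes next):

```python
import hashlib
import os

def hash_with_salt(password: str, salt: bytes = None) -> tuple:
    # A fresh random salt per user means two users with the same password
    # get different stored hashes, so no single precomputed dictionary
    # can cover the whole database.
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

salt_a, hash_a = hash_with_salt("password123")
salt_b, hash_b = hash_with_salt("password123")
assert hash_a != hash_b  # same password, different stored values

# Verification works because the salt is stored alongside the hash:
assert hash_with_salt("password123", salt_a)[1] == hash_a
```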

Sounds great. So why does @marshway say that "salting a fast hash function is like issuing life vests to the passengers of the Titanic"? Because these days computers are so fast that crackers can crack password hashes without creating a dictionary of hashed passwords. Instead they perform a "brute force" attack. For each hashed password in the database they take each password in the dictionary of most common passwords, hash it together with its "salt" and compare it to the hashed password in the database. When they find a match - they've found the password.

To prevent this kind of attack robust systems use a slow hash function such as PBKDF2, bcrypt or scrypt. These are hash functions that take such a long time to run that if the cracker needs to repeat the operation for each password in his dictionary it will make the attack so slow as to be infeasible. This technique is also known as key stretching.
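Key stretching with PBKDF2 can be sketched with Python's standard library (the iteration count here is an illustrative choice, not a recommendation from this post):

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)

# 600,000 iterations of HMAC-SHA256: each guess now costs the attacker
# hundreds of thousands of hash operations instead of one.
stored = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

# Verification repeats the same derivation with the stored salt.
candidate = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
assert candidate == stored
assert hashlib.pbkdf2_hmac("sha256", b"wrong", salt, 600_000) != stored
```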

Salting and key stretching are complementary solutions. Salting prevents crackers from producing a single dictionary of hashed password to crack all the passwords in a database. Key stretching makes it more difficult to crack specific hashed passwords.

So let's go back to @marshway's Titanic simile. If you use a very weak password (say one of the 100 most popular passwords - for example "password") then the fact that the server does salting and key stretching won't help you much. The cracker can go through all the hashed passwords in the database, hash each of the top 100 passwords with the salt and compare to the hashed password. You're the Titanic passenger who was so weak you got a heart attack just from seeing the iceberg.

At the other extreme, if you use a very strong password (say a completely random 30-character string) you're safe even if the server uses neither salt nor key stretching. As long as the passwords are hashed, no one is going to build a dictionary that includes your password. You're like Johnny Weissmuller on the Titanic - you can swim ashore without a life vest or a life boat.

If you use a fairly strong password, say a password that isn't on the top ten million passwords list but is on the top billion list, then salt will probably be enough to protect you, since the cracker is unlikely to try hashing a billion possible passwords with the hash to compare to your hashed password. Unless the cracker is specifically targeting you, in which case you're in deep trouble. This is like the extremely healthy Titanic passenger who could survive in the freezing water for hours as long as a life vest would keep them afloat.

But the great majority of people are in a fourth group - they use passwords that are in the top 10 million list but not the top 10 thousand list. For these people salt isn't enough - if the server uses a fast hashing function, the crackers can simply try each one of these most popular passwords with each of the salts in the database. These are the majority of the Titanic passengers who couldn't survive in the freezing water and needed a life boat - or key stretching.

The extremely strong don't need protection - the extremely weak can't be saved. Security systems, like boats, are designed to protect the great majority of the people who are somewhere in the middle.

Wednesday, November 28, 2012

Security Testing: Why it’s important things don’t work when they shouldn’t

Let’s say you’re developing an application and you reach the conclusion that you need to secure communications between your client and your server. You decide to use SSL/TLS using a good cipher-suite and you implement this using some standard library.

To test your implementation you set up a server with an SSL certificate and see that the client application communicates correctly with the server. Great – so you've tested your SSL implementation, right?

Wrong! It’s possible that your client application isn't actually using SSL at all and is communicating with the server without encryption. So to make sure that the communications are actually encrypted, you set up a packet analyzer (e.g. Wireshark) and see that indeed everything is encrypted. Now you’re finished, right?

Wrong! Your client application might use SSL when it talks to a server that supports SSL, but fall back to unencrypted communication with a server that doesn't. Since the only purpose of SSL is to secure communications against hackers, this would make SSL worthless - a hacker can perform a Man-in-the-Middle attack by setting up a fake server that communicates with the client in the clear and with the real server over SSL. So to test this, you remove the certificate from your server and check that the client stops communicating with the server. Great - now we're done?

Not even close! The anchor of SSL security is the validation of the server’s certificate. Perhaps the client application is willing to accept a badly signed or expired certificate? So you set up a server with a badly signed certificate, and another with an expired certificate, and check that the client doesn't communicate with either server. Finished?

Not yet! Maybe your application is willing to accept someone else’s correctly signed certificate? Or maybe the application will accept a certificate issued by a non-trusted certificate authority?

To test the implementation of a security feature, such as SSL, it is not enough to test that it works when it should - it's critical to test that it doesn't work when it shouldn't. Otherwise you haven't actually tested the feature, because part of the feature's functionality is that it should not work in an insecure situation. Defining such "negative" tests - checking that the system stops working when it isn't secure - is much more difficult than defining ordinary functional tests.
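A minimal sketch in Python of one such check that runs offline: asserting that the TLS context the client ships actually demands certificate validation. The `insecure` flag is a hypothetical knob standing in for the validation-disabling code the studies found in real apps:

```python
import ssl

def make_client_context(insecure: bool = False) -> ssl.SSLContext:
    ctx = ssl.create_default_context()
    if insecure:
        # The classic mistake: certificate validation silently turned off.
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx

# Positive test: the context we ship must require a valid certificate.
ctx = make_client_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname

# Negative tests then go further: handshake against a server with a
# self-signed, expired, or wrong-host certificate must FAIL (i.e. raise
# ssl.SSLCertVerificationError), not silently succeed.
```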

This may sound obvious but in reality most implementers of security features simply do not perform such security testing, or at least not enough of it. Researchers at the University of Texas recently published a study (PDF) analyzing dozens of SSL implementations and found that many fail to properly validate the SSL certificates and are thus entirely insecure. This included major applications such as Amazon, PayPal, and Chase Bank, and platforms such as Apache Axis. Another recent study (PDF) found that 8% of sampled Android apps that use SSL fail to properly validate certificates.

These failures would not have occurred if proper security testing had been done.  Such security testing isn't pentesting - it’s part of the basic functional testing of security features.

Sunday, November 25, 2012

How the PS3 LV0 key was (probably) hacked

If you follow Schneier you may have followed his link to an article entitled "Sony lets slip its PlayStation 3 Master Key", which tells us:
This wouldn't be the first time Sony has leaked important security keys, common to every PlayStation 3 console, however, this is the first time the console's LV0 decryption keys have been let loose in the wild.
So what makes the LV0 keys so special? These are the core security keys of the console, used to decrypt new firmware updates. With these keys in-hand, makers of custom firmwares (CFW) can easily decrypt any future firmware updates released by Sony, remaining a step ahead of the update game; likewise, modifying firmwares and preventing them from talking back to Sony HQ also becomes a much easier task.
Some background. PS3 utilizes a "chain of trust" to ensure that only trusted code runs on the PS3. The chain of trust starts from ROM code which is immutable. This ROM code validates the authenticity of a bootloader which in turn validates the authenticity of the LV0 code which in turn validates the next link in the chain.

A couple of years ago, due to an epic fail by Sony, hackers figured out the root secret key used by Sony to sign code and thus completely shattered the chain of trust.

But Sony still had a trick up their sleeve. The LV0 code isn't only signed - it's also encrypted with an on-chip key, the "LV0 decryption key". This allowed Sony to keep downloading new software to devices, software the attackers would need to break without being able to read it (since it's encrypted). Now a group of hackers has published these LV0 decryption keys.

So how did the hackers get their hands on the LV0 decryption keys? According to the article:
So where has Sony gone wrong and what can the firm do to resolve the issue? Perhaps the most obvious mistake was to allow keys to leak in the first place, which were extracted through a flaw in the console's hypervisor.
"A flaw in the console's hypervisor"? What flaw? And how can such a flaw leak the LV0 keys when they should be long gone by the time the attackers can load their own code?

I suspect that the article is confusing the new LV0 key hack with the original GeoHot attack which indeed circumvented the PS3 hypervisor to gain access to LV0 and LV1 (not the LV0 decryption keys - the decrypted LV0 code itself). This is described in more detail here.

So how did the hackers obtain the LV0 decryption key? Ace hacker Wololo described a possible method on his blog:
For the exploit that we knew about, it would’ve required hardware assistance to repeatedly reboot the PS3 and some kind of flash emulator to set up the exploit with varying parameters each boot, and it probably would’ve taken several hours or days of automated attempts to hit the right combination (basically the exploit would work by executing random garbage as code, and hoping that it jumps to somewhere within a segment that we control – the probabilities are high enough that it would work out within a reasonable timeframe).
This makes a lot of sense. Since the LV0 signing keys were hacked long ago, hackers can sign any file as the LV0 code and the device will run it. But since they didn't know the LV0 decryption key, this file will be decrypted with the LV0 key - becoming a random blob of instructions - before being run by the device. Effectively this causes random code to run on the device.

If you try enough random blobs, sooner or later you're going to get one whose code does something useful to you as an attacker. In this case, you're going to hit a version of the random code that jumps to unencrypted code elsewhere in memory that you control, with full access to the device's memory - including the LV0 decryption keys themselves.

This is exactly why it isn't enough to encrypt a file - you also need to sign it. Otherwise attackers can feed you random garbage which they can't control, but which may still serve their needs.
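The failure mode can be sketched in a few lines of Python. The toy counter-mode cipher below is an illustration only (real systems would use AES); the point is that decryption alone accepts anything, while a signature check (here an HMAC) does not:

```python
import hashlib
import hmac
import os

def keystream(key: bytes, length: int) -> bytes:
    # Toy counter-mode keystream built from SHA-256 (illustration only).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def decrypt(key: bytes, blob: bytes) -> bytes:
    # Decryption alone can never "fail": ANY input decrypts to something.
    return bytes(a ^ b for a, b in zip(blob, keystream(key, len(blob))))

key = os.urandom(32)
garbage = os.urandom(64)        # attacker-supplied "firmware"
code = decrypt(key, garbage)    # no error - just unpredictable bytes that run
assert len(code) == 64

def verify_then_decrypt(key: bytes, mac_key: bytes,
                        blob: bytes, tag: bytes) -> bytes:
    # A signature check rejects anything the legitimate party never
    # produced - the attacker's garbage stops here.
    expected = hmac.new(mac_key, blob, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return decrypt(key, blob)
```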

This is just another example of the old rule: when you need to authenticate - sign. Don't use encryption as a substitute for signing.

Of course in this case Sony did sign - they just signed badly and thus lost the signing keys.

Tuesday, November 20, 2012

Some thoughts on two-factor authentication

As defined by Wikipedia:
Two-factor authentication is an approach to authentication which requires the presentation of two or more of the three authentication factors: a knowledge factor ("something the user knows"), a possession factor ("something the user has"), and an inherence factor ("something the user is").
An example of two-factor authentication is the option in Google to require a key delivered via SMS to the user's phone, in addition to the account password, in order to log in to your Google account. In this case the password is "something the user knows" and the phone is "something the user has".

The security industry considers two-factor authentication to be much more secure than single-factor authentication. But when you think about it you've got to ask: why? If the effort required by a hacker to break one authentication method is X and the effort required to break the second is Y, then the effort required to break both is X+Y, which is at most double the effort required to break the stronger of the two methods. In the world of security a system is considered significantly more secure only if breaking it requires an order of magnitude more effort - double the effort isn't really significant. So why is two-factor authentication considered much stronger than single-factor authentication?

The answer lies in the fact that most hacking is opportunistic and not specific. In most cases the hackers don't develop an attack vector to break a specific system - they utilize existing, previously developed, attack vectors and adapt them to the specific system.

Therefore when asking how secure a system is, the question isn't how much effort it would take to develop attack vectors to break the system, but what the chances are that such vectors already exist. In the case of two-factor authentication, if the chance that an attack vector exists against the first factor is 1 in X and against the second factor is 1 in Y, then (assuming the chances are independent) the chance that vectors exist to break both is 1 in X*Y. The security of a system protected by two-factor authentication is thus the product of the security provided by the two factors, not the sum.

But this is not always the case.

If there is a relation between the chance of an attack vector breaking one authentication factor and the chance of a vector breaking the other, then two-factor authentication no longer multiplies the level of security. Sometimes a single attack vector covers both authentication factors, in which case, for that specific vector, having two factors doesn't really add anything. For example, if the two factors are a password and the device you're using, then someone running malware on your device can overcome both.
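The arithmetic of the independent and correlated cases can be sketched as follows (the odds are illustrative assumptions, not measured data):

```python
import math

# Assumed odds that an off-the-shelf attack vector already exists:
p_password = 1 / 100    # against the password factor
p_phone = 1 / 1_000     # against the phone factor

# Independent factors: the chances multiply, not add.
p_both = p_password * p_phone
assert math.isclose(p_both, 1 / 100_000)

# Fully correlated factors (one piece of malware breaks both at once):
# the combined odds are no better than the weaker factor alone.
p_correlated = max(p_password, p_phone)
assert math.isclose(p_correlated, 1 / 100)
```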

Because of this, for two-factor authentication to be effective the two factors must be as technologically distinct as possible. The hacking techniques used to overcome one factor must not coincide with those used to overcome the other, and should come from distinct realms of expertise.

Likewise two-factor authentication isn't really strong against well funded Advanced Persistent Threat (APT) attacks. Such attacks aren't opportunistic - they are targeted against a specific system and will do whatever it takes to compromise that system. For such attacks two-factor authentication does increase the effort required for the attacker to break the system, but at most by a factor of two.

Monday, October 22, 2012

Where I've been for the last two months

It's been two months since my last post. During this time I've been very active on the Stack Exchange IT Security site, under the equinym "David Wachtfogel" (an equinym is a pseudonym which is equal to the real name).

Stack Exchange is a moderated Q&A site. There are several advantages to posting answers on Stack Exchange compared to posting blogs on my own blog site, including:
  • Wider distribution. There are more people in the Stack Exchange audience than there are in my blog's.
  • Peer review: There are some great people active on Stack Exchange who will review your work and give good feedback.
  • Topics: Answering other people's questions means I don't need to come up with topics for posts.
Another advantage is that by following the Stack Exchange site I learn a lot from others on the site on subjects that I wouldn't have thought of looking up otherwise.

On the other hand Stack Exchange does cramp my style a little. It's a serious site so there's less room for humor. To get recognized it's usually important to respond quickly, which leaves less time to polish the post. So I do plan to continue posting to this blog when appropriate - and I'm currently working on my next post.

Following are links to some of my contributions to the Stack Exchange IT Security site. Read them, but don't stop there - there is a lot of great material on the site posted by people like Thomas Pornin, Polynomial and D.W.

I also found an interesting bug in MDK3.

Monday, August 6, 2012

Security QotD #2

From a 2008 Wired article by Bruce Schneier:
Quantum cryptography is unbelievably cool, in theory, and nearly useless in real life.
It takes guts to be the first to pronounce the emperor is naked.