Thursday, December 20, 2012

Protecting the weak (passwords)

From @marshway:
Salting a fast hash function is like issuing life vests to the passengers of the Titanic.
This is an apt simile, but it may require some explanation for readers less familiar with password security.

If a server uses passwords to authenticate users, it needs to store some value for each user with which to validate the password that user enters.

Servers can be hacked - in which case this database of password values could fall into the wrong hands. Because of this, servers don't store the password itself - they store a one-way cryptographic hash of the password. So a hacker who obtains a password hash from the database can't figure out the actual password.

In theory that should be enough. The problem is that most actual passwords aren't random strings - they are chosen by humans, usually in a way that is easy to remember. Because of this, crackers have been able to produce relatively small dictionaries (of maybe a few million passwords) that contain the great majority of passwords people actually use.

The crackers can also hash each of these common passwords with the popular hash algorithms and store the results in a dictionary of hashed passwords. So if a cracker gets a hashed password from a server they can simply look it up in this dictionary and find the password itself, thus defeating the hash.

To prevent this, servers "salt" the hashed passwords. Instead of simply hashing each password with a single hash algorithm, which the crackers could then build a dictionary of hashed passwords for, the server effectively uses a different hash function per password. This is done by hashing a few random bytes, called "salt", together with the password. This makes it impossible to prepare a single dictionary of hashed passwords that could be used to reveal many passwords in the server's database.
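
To make the mechanism concrete, here is a minimal sketch of salted hashing in Python, with SHA-256 standing in for the fast hash (the function names and storage details are made up for illustration; as discussed below, a real system should use a slow hash instead):

```python
import hashlib
import hmac
import os

def hash_password(password):
    salt = os.urandom(16)  # a fresh random salt per user
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest    # store both alongside the user record

def verify_password(password, salt, stored_digest):
    candidate = hashlib.sha256(salt + password.encode()).digest()
    return hmac.compare_digest(candidate, stored_digest)
```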

Sounds great. So why does @marshway say that "salting a fast hash function is like issuing life vests to the passengers of the Titanic"? Because these days computers are so fast that crackers can crack password hashes without creating a dictionary of hashed passwords. Instead they perform a "brute force" attack. For each hashed password in the database they take each password in the dictionary of most common passwords, hash it together with its "salt" and compare it to the hashed password in the database. When they find a match - they've found the password.
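
Here is what that brute-force loop might look like, continuing the hypothetical salt-plus-SHA-256 scheme from the sketch above with a made-up dictionary of common passwords:

```python
import hashlib

def crack(salt, stored_digest, dictionary):
    # Try each common password with this user's salt until one matches.
    for candidate in dictionary:
        if hashlib.sha256(salt + candidate.encode()).digest() == stored_digest:
            return candidate
    return None  # the password isn't in the dictionary
```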

To prevent this kind of attack robust systems use a slow hash function such as PBKDF2, bcrypt or scrypt. These are hash functions that take such a long time to run that if the cracker needs to repeat the operation for each password in his dictionary it will make the attack so slow as to be infeasible. This technique is also known as key stretching.
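
As an illustration, here is a minimal sketch of key stretching using PBKDF2 from Python's standard library; the iteration count is illustrative, not a recommendation:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # chosen so that a single hash takes a noticeable amount of time

def hash_password(password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, stored_digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)
```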

Salting and key stretching are complementary solutions. Salting prevents crackers from producing a single dictionary of hashed passwords to crack all the passwords in a database. Key stretching makes it more difficult to crack any specific hashed password.

So let's go back to @marshway's Titanic simile. If you use a very weak password (say one of the 100 most popular passwords - for example "password") then the fact that the server does salting and key stretching won't help you much. The cracker can go through all the hashed passwords in the database, hash each of the top 100 passwords with the salt and compare the result to the hashed password. You're the Titanic passenger who was so weak you got a heart attack just from seeing the iceberg.

At the other extreme, if you use a very strong password (say a completely random 30 character string) you're safe even if the server uses neither salt nor key stretching. As long as the passwords are hashed, no one is going to build a dictionary that includes your password. You're like Johnny Weissmuller on the Titanic - you can swim ashore without a life vest or a life boat.

If you use a fairly strong password - say one that isn't on the top-ten-million list but is on the top-billion list - then salt will probably be enough to protect you, since the cracker is unlikely to hash a billion candidate passwords with your salt just to compare them to your hashed password. Unless the cracker is specifically targeting you, in which case you're in deep trouble. This is like the extremely healthy Titanic passenger who could survive in the freezing water for hours, as long as a life vest kept them afloat.

But the great majority of people are in a fourth group - they use passwords that are in the top 10 million list but not the top 10 thousand list. For these people salt isn't enough - if the server uses a fast hashing function, the crackers can simply try each one of these most popular passwords with each of the salts in the database. These are the majority of the Titanic passengers who couldn't survive in the freezing water and needed a life boat - or key stretching.

The extremely strong don't need protection - the extremely weak can't be saved. Security systems, like boats, are designed to protect the great majority of the people who are somewhere in the middle.

Wednesday, November 28, 2012

Security Testing: Why it’s important things don’t work when they shouldn’t


Let’s say you’re developing an application and you reach the conclusion that you need to secure communications between your client and your server. You decide to use SSL/TLS using a good cipher-suite and you implement this using some standard library.

To test your implementation you set up a server with an SSL certificate and see that the client application communicates correctly with the server. Great – so you've tested your SSL implementation, right?

Wrong! It’s possible that your client application isn't actually using SSL at all and is communicating with the server without encryption. So to make sure that the communications are actually encrypted, you set up a packet analyzer (e.g. Wireshark) and see that indeed everything is encrypted. Now you’re finished, right?

Wrong! Your client application might use SSL when it talks to a server that supports SSL, but will be willing to work with a non-supporting server without SSL. Since the only purpose of SSL is to secure communications against hackers, this would make SSL worthless—hackers can perform a Man-in-the-Middle attack by setting up a fake server that will communicate with the client in the clear and communicate with the server with SSL. So to test this, you remove the certificate from your server and check that the client stops communicating with the server. Great – now we’re done?

Not even close! The anchor of SSL security is the validation of the server’s certificate. Perhaps the client application is willing to accept a badly signed or expired certificate? So you set up a server with a badly signed certificate, and another with an expired certificate, and check that the client doesn't communicate with either server. Finished?

Not yet! Maybe your application is willing to accept someone else’s correctly signed certificate? Or maybe the application will accept a certificate issued by a non-trusted certificate authority?

To test the implementation of a security feature, such as SSL, it is not enough to test that it works when it should - it's critical to test that it doesn't work when it shouldn't. Otherwise, you haven't actually tested the feature, because part of the feature's functionality is that it should not work in an insecure situation. Defining such "negative" tests - meaning checking that the system stops working when it's not secure - is much more difficult than defining ordinary functional testing.
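
To illustrate, here is a sketch of what such negative tests might look like using Python's standard ssl module, assuming hypothetical test hosts that present a self-signed, an expired, and a wrong-name certificate:

```python
import socket
import ssl

def can_connect(host, port=443):
    # A default context verifies the certificate chain and the hostname.
    context = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False

def test_valid_server_is_accepted():
    assert can_connect("valid.example.com")        # the positive test

def test_bad_certificates_are_rejected():
    # The negative tests: each of these connections must fail.
    assert not can_connect("self-signed.example.com")
    assert not can_connect("expired.example.com")
    assert not can_connect("wrong-host.example.com")
```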

This may sound obvious but in reality most implementers of security features simply do not perform such security testing, or at least not enough of it. Researchers at the University of Texas recently published a study (PDF) analyzing dozens of SSL implementations and found that many fail to properly validate the SSL certificates and are thus entirely insecure. This included major applications such as Amazon, PayPal, and Chase Bank, and platforms such as Apache Axis. Another recent study (PDF) found that 8% of sampled Android apps that use SSL fail to properly validate certificates.

These failures would not have occurred if proper security testing had been done.  Such security testing isn't pentesting - it’s part of the basic functional testing of security features.

Sunday, November 25, 2012

How the PS3 LV0 key was (probably) hacked

If you follow Schneier you may have followed his link to a Hexus.net article entitled "Sony lets slip its PlayStation 3 Master Key", which tells us:
This wouldn't be the first time Sony has leaked important security keys, common to every PlayStation 3 console, however, this is the first time the console's LV0 decryption keys have been let loose in the wild.
So what makes the LV0 keys so special? These are the core security keys of the console, used to decrypt new firmware updates. With these keys in-hand, makers of custom firmwares (CFW) can easily decrypt any future firmware updates released by Sony, remaining a step ahead of the update game; likewise, modifying firmwares and preventing them from talking back to Sony HQ also becomes a much easier task.
Some background: the PS3 uses a "chain of trust" to ensure that only trusted code runs on the console. The chain of trust starts from ROM code which is immutable. This ROM code validates the authenticity of a bootloader, which in turn validates the authenticity of the LV0 code, which in turn validates the next link in the chain.

A couple of years ago, due to an epic fail by Sony, hackers figured out the root secret key used by Sony to sign code and thus completely shattered the chain of trust.

But Sony still had a trick up their sleeves. The LV0 code isn't only signed - it's also encrypted with an on-chip key, the "LV0 decryption key". This allowed Sony to keep pushing new software to devices that attackers would have to break without being able to read it (since it's encrypted). Now a group of hackers has published these LV0 decryption keys.

So how did the hackers get their hands on the LV0 decryption keys? According to the Hexus.net article:
So where has Sony gone wrong and what can the firm do to resolve the issue? Perhaps the most obvious mistake was to allow keys to leak in the first place, which were extracted through a flaw in the console's hypervisor.
"A flaw in the console's hypervisor"? What flaw? And how can such a flaw leak the LV0 keys when they should be long gone by the time the attackers can load their own code?

I suspect that the Hexus.net article is confusing the new LV0 key hack with the original GeoHot attack which indeed circumvented the PS3 hypervisor to gain access to LV0 and LV1 (not the LV0 decryption keys - the decrypted LV0 code itself). This is described in more detail here.

So how did the hackers obtain the LV0 decryption key? Ace hacker Wololo described a possible method on his blog:
For the exploit that we knew about, it would’ve required hardware assistance to repeatedly reboot the PS3 and some kind of flash emulator to set up the exploit with varying parameters each boot, and it probably would’ve taken several hours or days of automated attempts to hit the right combination (basically the exploit would work by executing random garbage as code, and hoping that it jumps to somewhere within a segment that we control – the probabilities are high enough that it would work out within a reasonable timeframe).
This makes a lot of sense. Since the LV0 signing keys were hacked long ago, hackers can sign any file as the LV0 code and the device will run it. But because they didn't know the LV0 decryption key, that file is decrypted with the LV0 key before it runs, turning it into a random blob of instructions. Effectively this causes random code to run on the device.

If you try enough random blobs, sooner or later one of them will do something useful for you as an attacker. In this case, you're waiting for a run in which the random code jumps to unencrypted code you control elsewhere in memory, executing with full access to the device's memory - including the LV0 decryption keys themselves.

This is exactly why it isn't enough to encrypt a file - you also need to sign it. Otherwise attackers can feed you random garbage that they can't fully control - but that may still serve their needs.

This is just another example of the old rule: When you have to sign - sign. Don't encrypt.
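
As a minimal sketch of that rule, the firmware loader below refuses to touch anything that fails authentication before it ever decrypts it; an HMAC stands in for the asymmetric signature a real secure-boot chain would use, and the key handling and decrypt function are hypothetical:

```python
import hashlib
import hmac

def load_firmware(encrypted_blob, tag, verification_key, decrypt):
    # First check that the blob was produced by the holder of the signing key.
    expected = hmac.new(verification_key, encrypted_blob, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("firmware rejected: authentication failed")
    # Only authenticated input ever reaches the decryption (and execution) step.
    return decrypt(encrypted_blob)
```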



Of course in this case Sony did sign - they just signed badly and thus lost the signing keys.

Tuesday, November 20, 2012

Some thoughts on two-factor authentication

As defined by Wikipedia:
Two-factor authentication is an approach to authentication which requires the presentation of two or more of the three authentication factors: a knowledge factor ("something the user knows"), a possession factor ("something the user has"), and an inherence factor ("something the user is").
An example of two-factor authentication is the option in Google to require a code delivered via SMS to the user's phone, in addition to the account password, in order to log in to your Google account. In this case the password is "something the user knows" and the phone is "something the user has".

The security industry considers two-factor authentication to be much more secure than single-factor authentication. But when you think about it you've got to ask: why? If the effort required by a hacker to break one authentication method is X and the effort required to break the second is Y, then the effort required to break both is X+Y, which is at most double the effort required to break the stronger of the two methods. In the world of security a system is considered significantly more secure only if it adds an order of magnitude to the effort required to break it - doubling the effort isn't really significant. So why is two-factor authentication considered much stronger than single-factor authentication?

The answer lies in the fact that most hacking is opportunistic and not specific. In most cases the hackers don't develop an attack vector to break a specific system - they utilize existing, previously developed, attack vectors and adapt them to the specific system.

Therefore, when asking how secure a system is, the question isn't how much effort it would take to develop attack vectors to break the system, but what the chances are that such vectors have already been developed. In the case of two-factor authentication, if the chance that an attack vector exists against the first factor is 1 in X and against the second factor is 1 in Y, then (assuming the chances are unrelated) the chance that vectors exist to break both is 1 in X*Y. The security of a system protected by two-factor authentication is thus the product of the security provided by each of the two factors, not the sum.
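
A toy calculation with made-up numbers shows the difference:

```python
# Chance that a ready-made attack vector exists against each factor (made-up numbers).
p_password_vector = 1 / 1_000   # e.g. a leaked credential list containing the password
p_phone_vector = 1 / 1_000      # e.g. a working SMS interception vector

# Assuming the two are unrelated, an opportunistic attacker needs both to exist.
p_both = p_password_vector * p_phone_vector
print(p_both)  # 1e-06 - one chance in a million, not one in five hundred
```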

But this is not always the case.

If there is a relation between the chance of an attack vector breaking one authentication factor and the chance of a vector breaking the other, then two-factor authentication no longer multiplies the level of security. Sometimes a single attack vector covers both authentication factors, in which case having two factors doesn't really add anything against that vector. For example, if the two factors are a password and the device you're using, then someone running malware on your device can overcome both.

Because of this, for two-factor authentication to be effective the two factors must be as technologically distinct as possible. The hacking techniques used to overcome each factor must not overlap and should come from distinct realms of expertise.

Likewise two-factor authentication isn't really strong against well funded Advanced Persistent Threat (APT) attacks. Such attacks aren't opportunistic - they are targeted against a specific system and will do whatever it takes to compromise that system. For such attacks two-factor authentication does increase the effort required for the attacker to break the system, but at most by a factor of two.

Monday, October 22, 2012

Where I've been for the last two months

It's been two months since my last post. During this time I've been very active on the Stack Exchange IT Security site, under the equinym "David Wachtfogel" (an equinym is a pseudonym which is equal to the real name).

Stack Exchange is a moderated Q&A site. There are several advantages to posting answers on Stack Exchange compared to posting blogs on my own blog site, including:
  • Wider distribution: There are more people in the Stack Exchange audience than there are in my blog's.
  • Peer review: There are some great people active on Stack Exchange who will review your work and give good feedback.
  • Topics: Answering other people's questions means I don't need to come up with topics for posts.
Another advantage is that by following the Stack Exchange site I learn a lot from others on the site on subjects that I wouldn't have thought of looking up otherwise.

On the other hand Stack Exchange does cramp my style a little. It's a serious site so there's less room for humor. To get recognized it's usually important to respond quickly, which leaves less time to polish the post. So I do plan to continue posting to this blog when appropriate - and I'm currently working on my next post.

Following are links to some of my contributions to the Stack Exchange IT Security site. Read them, but don't stop there - there is a lot of great material on the site posted by people like Thomas Pornin, Polynomial and D.W.


I also found an interesting bug in MDK3.

Monday, August 6, 2012

Security QotD #2

From a 2008 Wired article by Bruce Schneier:
Quantum cryptography is unbelievably cool, in theory, and nearly useless in real life.
It takes guts to be the first to pronounce the emperor is naked.

Human Denial of Service using One Time Spam

The Krebs on Security blog reports on a new service offered by hackers. The service floods an individual with garbage communications in an effort to prevent the victim from receiving and processing valid communications. Such an attack is effectively a Denial-of-Service attack on humans - an HDoS attack. One possible goal of such an attack is to prevent a person from noticing some other action the attacker has taken, such as resetting the person's password, which in many systems triggers an email to that person.
This new form of attack is interesting in many ways, but I'd like to focus on the usage of one-time spam as the mechanism used to flood the target's email account.

Contemporary spam filters are quite good at filtering out spam. Yet these hackers are able to get thousands of junk emails through the best spam filters. Krebs and other commentators wondered whether these hackers have found some way to circumvent the spam filters.

The answer is no. These HDoS hackers don't need to circumvent the spam filters because the junk email they're sending isn't really spam.

Spam is a situation where a single message is sent to many email addresses. The email used to deliver the message may be personalized, but the message must be the same because the goal of spam is to offer a single product or service to many people.

Spam filters rely on the fact that the same spam email (with possibly minor changes) gets sent to many email accounts. When the spam filters recognize a pattern across a large volume of email sent to the accounts they monitor, they decide it is spam and filter it for all email accounts (unless the mails come from a whitelisted source).

The junk email used by the HDoS hackers isn't spam – it's a targeted attack against a single user. So the junk email generator used here can generate truly unique emails that are not sent to any other user – and there's no way for the spam filters to recognize them as spam. Such unique junk emails can be called "One-Time Spam" - and much like one-time pads [Wikipedia], they are invincible.

We've discussed variants of the classic Turing test in previous posts. The classic Turing test is interactive - the evaluator presents various challenges and uses the responses to identify if the responder is a human or a machine. But one can define a non-interactive Turing test in which the evaluator doesn't present any challenges and receives a single message - and needs to evaluate if the source of the message is human or a machine.

It is not difficult to create a machine that can pass a non-interactive Turing test - and this is all a one-time spam generator needs to do.

Alternatively, the HDoS hackers can use crowdsourcing - simply intercepting real emails being sent over the internet and using each such email once as a junk email. This is no longer strictly one-time spam, but the chances of a spam filter identifying such a mail as spam are close to zero - and the HDoS hacker only needs some of his junk emails to get through the filter.

HDoS attacks can't be prevented based on the content of the junk email, but there are other methods that can be used to identify such an attack and filter out the offending mail. The most trivial technique is checking if a mail account is receiving a very large number of emails from a single source address. This is the main technique used to prevent classic DoS attacks - and can similarly be circumvented by using multiple source computers for a distributed denial of service (DDoS) attack. 
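
As a sketch of that trivial technique, assuming a hypothetical stream of (source address, message) pairs arriving at one mailbox:

```python
from collections import Counter

def flag_flooding_sources(messages, threshold=500):
    # Count how many messages each source address has sent to this mailbox.
    per_source = Counter(source for source, _message in messages)
    return {source for source, count in per_source.items() if count > threshold}
```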

HDoS and One-Time Spam are nonregistered trademarks of the Good Enough Security blog :-)

Monday, July 16, 2012

Security QotD #1

From a paper on mobile device user authentication:
Tell me about it.

Sunday, July 15, 2012

Replay attack on Apple In-App Purchases

The Next Web reports on a flaw in the Apple In-App Purchase process which allows the re-use of In-App Purchase receipts. Using this flaw a hacker named Alexey V. Borodin has created an online service that allows people to receive In-App Purchase services without paying for them.

The normal process of authenticating In-App Purchases is as follows:
  1. The client device sends a request to Apple to make the In-App Purchase
  2. Apple charges the client user and returns a digitally signed receipt of the purchase to the client device
  3. The client device sends the receipt to the application server as a proof of purchase to receive the in-app service
The problem is that the digitally signed receipt is the same for all clients. So if one person makes a purchase and receives a receipt, that person can then distribute the receipt to others who can then get the In-App service without paying Apple. This is what Borodin did.

Now it's not as simple as that. The client device expects to receive the receipt from Apple - not from Borodin. The client device goes to a server with an Apple domain name and authenticates the server using TLS. So how could Borodin impersonate Apple?

The answer lies in the fact that DNS and TLS security in iOS (and in general) are intended to protect client users from a malicious third party - not to protect Apple from malicious clients. Because of this iOS allows the client user to override the DNS server and to add her own CA (Certificate Authority) certificates. So the client user can easily point the client device to Borodin's site instead of Apple's (by replacing the DNS server) and authenticate Borodin's site with Apple's name (using Borodin's CA certificate).

What Apple should have done is make the receipts unique, preferably by including a device-unique identifier (such as the iOS UDID or a derivative thereof). If they deemed this problematic due to privacy (which it isn't), they could have at least included a time stamp to limit how long the receipt could be used, or even just a cryptographic nonce [Wikipedia], which the application server could use to identify and prevent attempted re-use of an In-App Purchase receipt.
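
For illustration, here is a minimal sketch of the server-side replay check that a nonce would enable; the field name is hypothetical and the receipt's signature is assumed to have been verified already:

```python
seen_nonces = set()  # in practice a persistent store shared by the application servers

def accept_receipt(receipt):
    nonce = receipt["nonce"]
    if nonce in seen_nonces:
        return False       # replay: this receipt has already been redeemed
    seen_nonces.add(nonce)
    return True            # first and only redemption of this receipt
```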

So why didn't Apple do this? My guess is that the Apple engineers who developed the In-App Purchase solution assumed that their connection to the Apple domain is protected (thanks to TLS) and didn't consider the fact that iOS allows users to add CA certificates. The Apple engineers who designed the certificate system that allows this were trying to protect the user from hackers - not to protect Apple from users. This is not the first time Apple has been hit by this mistake.

But even if it weren't possible for users to add CA certificates, using a fixed receipt for multiple devices is just bad security. If you have input on why Apple did this - please comment below.




Thursday, June 14, 2012

Flaming headlines

Readers of this blog know that I can't miss a chance to ridicule hyperbolic headlines. Mocana has a great blog on "Security news for the Internet of things", but they sometimes overstate the case.

The headline of today's Mocana post is "Researcher: Flame’s Crypto Collision Discovery Cost At Least $200K". The source Mocana gives for this assessment is an Ars Technica post. Reading that post one finds the following statement:
Sotirov said the $200,000 estimate is likely the maximum cost of the computing power that would have been required when Flame was likely being designed. He held out the possibility that the collision attack may have cost much less if the researchers figured out techniques that were less computing intensive.
Notice how Sotirov's "maximum cost" of $200K became "At Least $200K"? Nuff said.

Video Turing tests and face recognition

This post starts off topic, but does get back on topic in the end. Please bear with me.

I assume the readers of this blog know what the classic Turing test [Wikipedia] is. To pass the Turing test a machine must be able to impersonate human responses to such a degree that a human can't distinguish between the machine and a human based on their responses.

The original Turing test is meant to test artificial intelligence. Similar tests could be used to test other artificial entities - such as video.

A Video Turing test (TM) could be defined as follows. Set up a wall with a window and with a video monitor that displays a stream captured by a camera on the other side of the wall. To pass the Video Turing test (VTT), it needs to be impossible for a human to distinguish the window from the monitor.

There are various levels of VTTs. A 2D VTT assumes a static one-eyed viewer. A 3D VTT would also work with a two-eyed viewer, but still static. A virtual reality VTT would also work with a moving viewer.

Friday, February 17, 2012

We are the 99.8%: Premature report of RSA demise


If you haven’t read the paper itself [PDF], you've probably seen the NY Times article. A group of researchers led by Arjen Lenstra analyzed a few million RSA public keys and found that some 0.2% of these keys share at least one prime factor with another public key and can thus be factorized. This is due to a random number generation flaw in the process used to choose primes during the generation of these public keys.

I don't know if their analysis technique is novel, but it's definitely interesting. Instead of identifying some weakness in a specific key generation library, or analyzing each key separately, the team analyzed pairs of keys to see if they share a common factor. This is done by finding the GCD (Greatest Common Divisor) of each pair. If a pair of keys doesn't share a prime factor, their GCD will be 1. By working with a very large number of keys the researchers were able to identify a large number of faulty keys.
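
Here is a naive sketch of the idea using pairwise GCDs (the researchers used far more efficient batched-GCD techniques to handle millions of keys):

```python
from itertools import combinations
from math import gcd

def find_shared_factors(moduli):
    # moduli: a list of RSA public moduli, each of the form N = p*q
    weak = {}
    for n1, n2 in combinations(moduli, 2):
        g = gcd(n1, n2)
        if g not in (1, n1, n2):   # a shared prime factor, not an identical key
            weak[n1] = g
            weak[n2] = g
    return weak                    # modulus -> one recovered prime factor
```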

So is this a big deal? Not as big as you might think from reading the NY Times article.
Obligatory PHD comic

Thursday, January 12, 2012

Things I haven't been writing about

Due to a hectic period at work (I'm currently trying to patch a cracked standardized security scheme) I haven't managed to post anything lately.  So here's a list of subjects I wanted to write about and a short summary of my thoughts.

Iranians capture of US military drone
I like Richard Langley's theory (quoted by Wired) regarding how the Iranians may have captured the US military drone. Langley proposes that the Iranians may have jammed the drone's secured GPS signal, causing the drone to fall back to the generally available clear GPS signal, which the Iranians then spoofed.
If Langley's theory is correct this is yet another case of functionality (ensuring drones can find their way home) trumping security. A more secure solution would be to rely on the last secure GPS reading and use on-board hardware (e.g. an accelerometer) to estimate a delta on that, as sketched below. Sooner or later the secured GPS signal will come back to correct any navigation errors made due to the estimates.
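
A toy sketch of that fallback logic (all names are hypothetical):

```python
def estimate_position(secure_fix, last_trusted_fix, inertial_delta):
    if secure_fix is not None:
        return secure_fix          # the authenticated GPS signal is available
    # Jammed: don't fall back to the spoofable clear signal; dead-reckon from
    # the last trusted fix using on-board sensors (e.g. accelerometers).
    return (last_trusted_fix[0] + inertial_delta[0],
            last_trusted_fix[1] + inertial_delta[1])
```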

Android approved by Pentagon
This misleading headline appeared on many sites, including Slashdot. This is of course nonsense - Android as a standardized security system is a security nightmare. A specific device which happens to use Android was approved by the Pentagon. The above headline is equivalent to saying that Linux was approved by the Pentagon ...

Counterfeit chips in US military hardware
If you're interested in this subject, which has come up several times over the last few years, make sure to read Bunnie's blog post. I have a feeling that a lot of the noise on this subject is coming from western vendors who simply can't compete with the Chinese vendors.

Upcoming posts
I have two big posts that I need to complete - so stay tuned. These are:
  • A summary post on why standardized security systems fail and what needs to be done to make a truly secure standard.
  • An announcement on the publication of version 0.01 of the security failures database. Of course, I need to build the database first :-) Any help would be greatly appreciated - if you can contribute please send a message to my dedicated gmail account (security.fails.db).