
Sunday, November 25, 2012

How the PS3 LV0 key was (probably) hacked

If you follow Schneier you may have followed his link to a Hexus.net article entitled "Sony lets slip its PlayStation 3 Master Key", which tells us:
This wouldn't be the first time Sony has leaked important security keys, common to every PlayStation 3 console, however, this is the first time the console's LV0 decryption keys have been let loose in the wild.
So what makes the LV0 keys so special? These are the core security keys of the console, used to decrypt new firmware updates. With these keys in-hand, makers of custom firmwares (CFW) can easily decrypt any future firmware updates released by Sony, remaining a step ahead of the update game; likewise, modifying firmwares and preventing them from talking back to Sony HQ also becomes a much easier task.
Some background: the PS3 uses a "chain of trust" to ensure that only trusted code runs on the console. The chain of trust starts from ROM code, which is immutable. This ROM code validates the authenticity of a bootloader, which in turn validates the authenticity of the LV0 code, which in turn validates the next link in the chain.
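
To make the structure concrete, here is a minimal sketch of such a boot chain in Python. An HMAC stands in for a real public-key signature (an actual console would use RSA or ECDSA in hardware), and the key material and images are invented for illustration:

    import hashlib
    import hmac

    def toy_verify(key: bytes, image: bytes, signature: bytes) -> bool:
        # Toy stand-in for a real public-key signature check.
        expected = hmac.new(key, image, hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)

    def boot_chain(rom_key: bytes, stages: list) -> None:
        # Each stage is an (image, signature) pair: ROM verifies the
        # bootloader, the bootloader verifies LV0, and so on. For
        # simplicity every link here uses the same verification key.
        for image, signature in stages:
            if not toy_verify(rom_key, image, signature):
                raise RuntimeError("chain of trust broken - refuse to boot")
            # Control would now pass to the verified image, which performs
            # the same check on the next link before running it.

    key = b"illustrative-signing-key"
    lv0 = b"lv0 image bytes"
    boot_chain(key, [(lv0, hmac.new(key, lv0, hashlib.sha256).digest())])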

A couple of years ago, due to an epic fail by Sony (it reused the supposedly random nonce in its ECDSA signatures, allowing anyone to compute the private key), hackers figured out the root secret key used by Sony to sign code and thus completely shattered the chain of trust.

But Sony still had a trick up its sleeve. The LV0 code isn't only signed - it's also encrypted with an on-chip key, the "LV0 decryption key". This lets Sony keep shipping new software to devices: attackers would need to break that software without even being able to read it (since it's encrypted). Now a group of hackers has published these LV0 decryption keys.
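
Continuing the toy model above, device-side handling of an LV0 update might look roughly like this. A hash-counter keystream stands in for the real hardware cipher, and both key names and values are invented for illustration:

    import hashlib
    import hmac

    ON_CHIP_LV0_KEY = b"never-leaves-the-silicon"  # illustrative
    SIGNING_KEY = b"illustrative-signing-key"      # the key that later leaked

    def keystream(key: bytes, n: int) -> bytes:
        # Hash-counter stream cipher as a stand-in for the real cipher.
        out = b""
        ctr = 0
        while len(out) < n:
            out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
            ctr += 1
        return out[:n]

    def load_lv0(blob: bytes, signature: bytes) -> bytes:
        # 1. Authenticity: only blobs carrying a valid signature get this far.
        expected = hmac.new(SIGNING_KEY, blob, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, signature):
            raise RuntimeError("bad signature")
        # 2. Confidentiality: decrypt with the on-chip key; the console
        #    would then execute the result.
        ks = keystream(ON_CHIP_LV0_KEY, len(blob))
        return bytes(a ^ b for a, b in zip(blob, ks))

In this sketch the signature covers the ciphertext; either way, once the signing key has leaked, step 1 no longer stops an attacker - which is exactly what the attack described below exploits.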

So how did the hackers get their hands on the LV0 decryption keys? According to the Hexus.net article:
So where has Sony gone wrong and what can the firm do to resolve the issue? Perhaps the most obvious mistake was to allow keys to leak in the first place, which were extracted through a flaw in the console's hypervisor.
"A flaw in the console's hypervisor"? What flaw? And how can such a flaw leak the LV0 keys when they should be long gone by the time the attackers can load their own code?

I suspect that the Hexus.net article is confusing the new LV0 key hack with the original GeoHot attack which indeed circumvented the PS3 hypervisor to gain access to LV0 and LV1 (not the LV0 decryption keys - the decrypted LV0 code itself). This is described in more detail here.

So how did the hackers obtain the LV0 decryption key? Ace hacker Wololo described a possible method on his blog:
For the exploit that we knew about, it would’ve required hardware assistance to repeatedly reboot the PS3 and some kind of flash emulator to set up the exploit with varying parameters each boot, and it probably would’ve taken several hours or days of automated attempts to hit the right combination (basically the exploit would work by executing random garbage as code, and hoping that it jumps to somewhere within a segment that we control – the probabilities are high enough that it would work out within a reasonable timeframe).
This makes a lot of sense. Since the LV0 signing keys were hacked long ago, hackers can sign any file as the LV0 code and the device will run it. But since they didn't know the LV0 decryption key, the device decrypts that file with the LV0 key before running it, turning it into a random blob of instructions. Effectively this causes random code to run on the device.

If you try enough random blobs, sooner or later one of them will do something that's good for you as an attacker. In this case, you're waiting for a version of the random code that jumps to unencrypted code the attacker has planted somewhere else in memory - code that runs with full access to the device's memory, including the LV0 decryption keys themselves.
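
Here is a toy simulation of that search, continuing the model above. Since the attacker holds the leaked signing key, any blob passes the signature check, and decrypting it with the unknown on-chip key yields effectively uniform random bytes; model the resulting "jump" as a uniformly random address and count reboots until it lands in attacker-controlled memory (all sizes are made up for illustration):

    import random

    ADDRESS_SPACE = 0x100000               # 1 MiB illustrative address space
    CONTROLLED = range(0x80000, 0x81000)   # 4 KiB filled with attacker shellcode

    def boot_with_garbage() -> bool:
        # One simulated reboot: executing decrypted garbage is modelled
        # as jumping to a uniformly random address.
        return random.randrange(ADDRESS_SPACE) in CONTROLLED

    attempts = 1
    while not boot_with_garbage():
        attempts += 1
    print(f"hit attacker-controlled code after {attempts} simulated reboots")
    # With a 4 KiB window in 1 MiB this succeeds after ~256 reboots on
    # average - hence the automated reboot rig and the "hours or days"
    # Wololo mentions.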

This is exactly why it isn't enough to encrypt a file - you also need to sign it. Otherwise attackers can feed you random garbage that they can't control but that may still serve their needs.

This is just another example of the old rule: when you need to trust a file - sign it. Don't just encrypt it.



Of course in this case Sony did sign - they just signed badly and thus lost the signing keys.

Tuesday, October 18, 2011

German government Trojan and collateral damage

As you may have heard, the Chaos Computer Club (CCC) reverse engineered a Trojan written by the German government for the purpose of legal wiretapping.

Though the Trojan itself is legal in Germany (as long as it is installed only under court order), the CCC revealed a few embarrassing facts about it.

The list of issues is long but can be summarized in one point - the Trojan's developers made no significant effort to prevent other parties from using the Trojan for their own purposes.

One of the reasons security systems fail is that their designers focus on a single adversary and don't consider others. In this case it's likely that the designers of this Trojan focused on ensuring that the Trojan's targets wouldn't identify and remove it. They didn't realize that their most formidable adversaries aren't the targets but members of the hacker community, who are happy for any opportunity to embarrass the government.

More importantly, since the Trojan developers' goal was to "attack" their targets, they didn't realize that they were still obligated to prevent undue damage to those same targets.

This isn't the first time a security system has failed in such a way. Perhaps the most famous case is the Sony copy-protection rootkit (Wikipedia), which some consider to have been, when revealed, the final nail in the coffin of copy protection for music CDs.

For a security system to succeed it must not cause undue damage. Anything but the tiniest amount of collateral damage is unacceptable and is likely to bring the downfall of the system.

Following the announcement from the CCC, several anti-virus developers have announced that, due to the collateral damage, they will treat this Trojan as malware. The German government's developers will need to come up with something new - perhaps they should ask the CCC for some tips.

Tuesday, October 4, 2011

GSM A5/1: (Sub)Standard Security pt.2

GSM is a widely deployed standard for cellular communications, one that also specifies security mechanisms. This post will describe one aspect of the GSM security architecture - how GSM security has been hacked, and why.

The GSM standard deals with two main security concerns - payment and privacy. The first goal is to ensure that the person making a call pays for it. The second goal is to prevent unauthorized parties from accessing communications over the GSM network. This post will concentrate on the second area - privacy.
The initial GSM standard, published in 1990, stipulates the use of an algorithm called A5/1 for scrambling GSM voice communications. A5/1 has two important characteristics: it uses a 64-bit key, and it was intended to be kept secret.

Keeping an algorithm secret when dozens of device manufacturers implement it works only for as long as it lasts - which isn't very long. A5/1 remained secret for a few years, but it was reverse engineered and published on the Internet in 1999.
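
For the curious, here is a sketch of the A5/1 keystream generator as described in the material published after that reverse engineering: three short LFSRs clocked irregularly by a majority rule. Register sizes, taps and clocking bits follow the public description; this is a readable illustration, not production code:

    # Register size, feedback taps and clocking bit for R1, R2, R3.
    REGS = [(19, (13, 16, 17, 18), 8),
            (22, (20, 21), 10),
            (23, (7, 20, 21, 22), 10)]

    class A51:
        def __init__(self, key: int, frame: int):
            self.r = [0, 0, 0]
            for i in range(64):            # clock in the 64-bit session key
                self._clock_all((key >> i) & 1)
            for i in range(22):            # clock in the 22-bit frame number
                self._clock_all((frame >> i) & 1)
            for _ in range(100):           # 100 warm-up clocks, output discarded
                self._clock()

        def _step(self, i: int, feed: int) -> None:
            size, taps, _ = REGS[i]
            for t in taps:
                feed ^= (self.r[i] >> t) & 1
            self.r[i] = ((self.r[i] << 1) | feed) & ((1 << size) - 1)

        def _clock_all(self, bit: int) -> None:
            for i in range(3):             # key/frame loading ignores the majority rule
                self._step(i, bit)

        def _clock(self) -> None:
            clk = [(self.r[i] >> REGS[i][2]) & 1 for i in range(3)]
            maj = (clk[0] & clk[1]) | (clk[0] & clk[2]) | (clk[1] & clk[2])
            for i in range(3):             # only registers agreeing with the majority move
                if clk[i] == maj:
                    self._step(i, 0)

        def bit(self) -> int:
            self._clock()                  # clock first, then output the XOR of the MSBs
            return ((self.r[0] >> 18) ^ (self.r[1] >> 21) ^ (self.r[2] >> 22)) & 1

    gen = A51(0x0123456789ABCDEF, 42)      # illustrative key and frame number
    ks = [gen.bit() for _ in range(228)]   # 114 bits per direction of a GSM burst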

Cryptanalysts found several weaknesses in the A5/1 algorithm - but none as significant as the fact that the algorithm uses a 64-bit key.

Using a 64-bit key to encrypt data is fine as long as one of the following conditions is true:
  1. You're living in the 20th century.
  2. You're living in the early 21st century, the data secured by any specific key is not very valuable, and no known plaintext is encrypted under each key (see the back-of-the-envelope sketch below).
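
Here is a back-of-the-envelope sketch of why the century matters. The search rates below are illustrative round numbers, not benchmarks; the known-plaintext condition matters because precomputed tables over a known plaintext cut the online effort far below even these figures:

    KEYSPACE = 2 ** 64
    SECONDS_PER_YEAR = 3600 * 24 * 365

    # Illustrative exhaustive-search rates, not measurements.
    for label, keys_per_sec in [("1990s software, ~1e6 keys/s", 1e6),
                                ("early-21st-century hardware, ~1e12 keys/s", 1e12)]:
        years = KEYSPACE / keys_per_sec / SECONDS_PER_YEAR
        print(f"{label}: ~{years:,.1f} years to sweep the 64-bit keyspace")
    # Roughly 585,000 years versus about 0.6 years: a 64-bit key went from
    # unassailable to merely inconvenient within a couple of decades.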

Thursday, September 15, 2011

HDCP: (Sub)Standard Security pt.1

I owe the readers of this blog an explanation (or two). I promised to explain "Why Security Systems Fail" and so far, after more than a month, there has been only one such post (on RSA SecurID).

To make up for this I'll do a series of posts on a group of security systems describing how and why they were breached. What these systems have in common is that they were each defined as a "standard" - i.e. a specification for the security system was published and was implemented by multiple parties. The first post in the series is dedicated to HDCP. Subsequent posts will cover GSM, X.509 certificates and others.

Tuesday, August 16, 2011

The seven laws of security engineering

There are a few laws in the field of security engineering that impact many aspects of the discipline. Most of these laws are self-evident and well known, but applying them to real-world situations is difficult. In fact, most security failures in the field can be traced back to one or more of these laws.
Following is a list of seven such laws with a short description of each law. Future posts will elaborate on these laws (and others) as part of an analysis of specific cases.
You might ask a security engineer whether a certain system is secure. If they give you an answer that sounds evasive and noncommittal, that's good - otherwise they're not telling you the whole truth.
Because the truth is that no system is 100% secure in and of itself. The most a security engineer can say is that, under certain assumptions, the system is secure.