
Friday, February 17, 2012

We are the 99.8%: Premature report of RSA demise


If you haven’t read the paper itself [PDF], you've probably seen the NY Times article. A group of researchers led by Arjen Lenstra analyzed a few million RSA public keys and found that about 0.2% of these keys share at least one prime factor with another public key and can thus be factored. The cause is a flaw in the random number generation used to choose the primes when these public keys were generated.

I don’t know if their analysis technique is novel, but it’s definitely interesting. Instead of identifying some weakness in a specific key generation library, or analyzing each key separately, the team analyzed pairs of keys to see if they share a common factor. This is done by computing the GCD (greatest common divisor) of each pair of moduli. If a pair of keys doesn't share a prime factor, their GCD is 1; if they do share one, the GCD is that prime, which immediately factors both moduli. By working with a very large number of keys the researchers were able to identify a large number of faulty keys.
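Here is a minimal sketch of the idea, with toy numbers of my own rather than the researchers' actual pipeline (at the scale of millions of keys one would use a batch-GCD / product-tree algorithm instead of a naive pairwise loop):

```python
from math import gcd

# Toy moduli for illustration only (hypothetical numbers, not real keys).
# n1 and n2 were "badly" generated so that they share the prime factor 101;
# n3 shares nothing with them. Real RSA moduli are 1024+ bits.
moduli = [101 * 113, 101 * 127, 11 * 13]

for i in range(len(moduli)):
    for j in range(i + 1, len(moduli)):
        g = gcd(moduli[i], moduli[j])
        if g > 1:
            # A shared prime factors BOTH moduli: the other factor is n // g.
            print(f"keys {i} and {j} share the prime {g}: "
                  f"{moduli[i]} = {g} * {moduli[i] // g}, "
                  f"{moduli[j]} = {g} * {moduli[j] // g}")
```

A key that shares no factor with any other key in the collection survives this test, which is why the attack only exposes the faulty 0.2%.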

So is this a big deal? Not as big as you might think from reading the NY Times article.
Obligatory PHD comic

Thursday, September 15, 2011

HDCP: (Sub)Standard Security pt.1

I owe the readers of this blog an explanation (or two). I promised to explain "Why Security Systems Fail" and so far, after more than a month, there has been only one such post (on RSA SecurID).

To make up for this I'll do a series of posts on a group of security systems describing how and why they were breached. What these systems have in common is that they were each defined as a "standard" - i.e. a specification for the security system was published and was implemented by multiple parties. The first post in the series is dedicated to HDCP. Subsequent posts will cover GSM, X.509 certificates and others.

Wednesday, September 7, 2011

DigiNotar: When is a secure network not secure?

The Dutch government report (PDF) on the DigiNotar hack has confirmed what I suspected:
The separation of critical components was not functioning or was not in place. We have strong indications that the CA-servers, although physically very securely placed in a tempest proof environment, were accessible over the network from the management LAN.
These guys at DigiNotar are living in the nineties. These days the most important attack vector by far is through the network and not physical access. DigiNotar, like many others, invested more effort in defending against the less important attack.

But don't mock them. If you use a disk encryption technology like PointSec or PGP Disk and think it gives you any significant protection, you may be making the same mistake - assuming the attacker needs physical access. It's quite likely hackers already have control of your computer even though it's physically in your possession. You should do what you can to prevent network-based attacks (firewall, anti-virus), but even then you must not assume you're 100% secure. If you have anything that is truly secret, just don't put it on a computer you connect to the Internet.

There's been a paradigm shift in the world of corporate security. Instead of traveling and trying to physically access the information of a single company, hackers can use technologies like Remote Access Trojans to attempt attacks on hundreds of companies from the comfort of their own home and with less risk of getting caught by law enforcement. Too many security teams, not just RSA and DigiNotar, haven't yet fully adjusted to this situation.

BTW, the full paragraph in the report begins with another sentence:
The most critical servers contain malicious software that can normally be detected by anti-virus software. The separation of critical components was not functioning or was not in place. We have strong indications that the CA-servers, although physically very securely placed in a tempest proof environment, were accessible over the network from the management LAN.
Which reminds me yet again of this XKCD classic:


Tuesday, August 30, 2011

Security paradigm shift: Glitching the XBOX 360

In the mid-90s a new kind of attack hit the smart card industry - the glitch attack. Markus Kuhn and Ross Anderson described this attack in a paper from 1996:
Power and clock transients can also be used in some processors to affect the decoding and execution of individual instructions. Every transistor and its connection paths act like an RC element with a characteristic time delay; the maximum usable clock frequency of a processor is determined by the maximum delay among its elements. Similarly, every flip-flop has a characteristic time window (of a few picoseconds) during which it samples its input voltage and changes its output accordingly. This window can be anywhere inside the specified setup cycle of the flip-flop, but is quite fixed for an individual device at a given voltage and temperature.
So if we apply a clock glitch (a clock pulse much shorter than normal) or a power glitch (a rapid transient in supply voltage), this will affect only some transistors in the chip. By varying the parameters, the CPU can be made to execute a number of completely different wrong instructions, sometimes including instructions that are not even supported by the microcode. Although we do not know in advance which glitch will cause which wrong instruction in which chip, it can be fairly simple to conduct a systematic search.
Glitch attacks undermine what is perhaps the most common and fundamental assumption (law #1) of secure computing - that software will always run as it was coded. The breaking of this paradigm created a situation in the late 90s in which pretty much every smart card in the world could be hacked at a low cost. It took the smart card industry several years to design and deploy new hardware that was resistant to this kind of attack (solutions include various forms of input regulators, filters and detectors). Attempts to prevent this form of attack using special secure coding techniques proved ineffective - you can't really solve a security hole using software if you can't rely on the software running as coded.

Yesterday a hacker named gligli announced that he performed such a glitch attack on the Xbox 360, the details of which he described in a Free60.org wiki:
We found that by sending a tiny reset pulse to the processor while it is slowed down does not reset it but instead changes the way the code runs, it seems it's very efficient at making bootloaders memcmp functions always return "no differences". memcmp is often used to check the next bootloader SHA hash against a stored one, allowing it to run if they are the same. So we can put a bootloader that would fail hash check in NAND, glitch the previous one and that bootloader will run, allowing almost any code to run.
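To make the failure mode concrete, here is a hypothetical Python sketch of that kind of check (my own illustration, not the actual Xbox 360 bootloader code): the glitch effectively turns the comparison into one that always reports "no differences".

```python
import hashlib

# Hypothetical sketch of the check described above. A bootloader stage hashes
# the next-stage image and compares the digest against a stored value before
# running it. A well-timed glitch corrupts the comparison so it reports
# "equal" even for a tampered image.

def memcmp_like(a: bytes, b: bytes, glitched: bool = False) -> bool:
    """Return True if the buffers match; `glitched` models a reset/power
    glitch that makes the comparison always report 'no differences'."""
    return True if glitched else a == b

def verify_and_run(next_stage: bytes, stored_digest: bytes, glitched: bool) -> None:
    computed = hashlib.sha1(next_stage).digest()
    if memcmp_like(computed, stored_digest, glitched):
        print("hash OK - running next bootloader")
    else:
        print("hash mismatch - halting")

good_image = b"original signed bootloader"
stored_digest = hashlib.sha1(good_image).digest()
tampered_image = b"patched bootloader running arbitrary code"

verify_and_run(tampered_image, stored_digest, glitched=False)  # hash mismatch - halting
verify_and_run(tampered_image, stored_digest, glitched=True)   # hash OK - running next bootloader
```

The point is that no amount of care in writing the check helps once the hardware can be made to mis-execute it.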
As far as I know this is the first successful glitch attack performed on a consumer electronics device; the first, but not the last. Consumer electronics (CE) device security engineers have not yet made the paradigm shift to a world with glitch attacks. It is quite likely that a similar glitch attack can be performed on other CE devices, be they other game consoles, cell phones or tablets. This is really big news, and it's surprising that it hasn't made more waves in the security blogosphere.

One of the reasons CE devices have been considered less susceptible to glitch attacks is that they run at a much higher clock frequency (many smart cards run at under 5 MHz while modern CE devices run at over 200 MHz). The Xbox 360 helpfully has a signal that allows hackers to reduce the clock frequency to 520 kHz - making glitch attacks much easier. If the Xbox security team had more smart card design experience they probably wouldn't have made such a mistake.

Tuesday, August 16, 2011

The RSA SecurID debacle: Why it happened

The RSA SecurID saga was one of the more interesting security stories of 2011. Analyzing the background of this story can give some insight as to how security decisions are taken and why security systems fail.

The seven laws of security engineering

There are a few laws in the field of security engineering that impact many aspects of the discipline. Most of these laws are self-evident and well known, but applying them to real-world situations is difficult. In fact, most security failures in the field can be traced to a violation of one or more of these laws.
Following is a list of seven such laws with a short description of each. Future posts will elaborate on these laws (and others) as part of an analysis of specific cases.
You might ask a security engineer if a certain system is secure. If they give you an answer which sounds evasive and noncommittal that’s good – otherwise they’re not telling you the whole truth.
Because the truth is that no system is 100% secure in and of itself. The most a security engineer can say is that under certain assumptions the system is secure.
Dilbert.com