If you haven’t read the paper itself [PDF],
you've probably seen the NY Times article.
A group of researchers led by Arjen Lenstra analyzed a few million RSA public keys
and found that some 0.2% of these keys share at least one prime factor with
another public key and can thus be factorized. This is due to a random number
generation flaw in the process used to choose primes during the generation of
these public keys.
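To get a feel for how a weak entropy source produces shared primes, here is a toy model in Python (entirely hypothetical, and not based on any particular library): the first prime is drawn while the generator is seeded from a tiny pool, and real entropy only arrives before the second prime is chosen.

```python
import random
from collections import Counter

def toy_prime(rng, bits=16):
    """Pick a random prime by trial division (tiny sizes, illustration only)."""
    while True:
        c = rng.getrandbits(bits) | (1 << (bits - 1)) | 1  # force size and oddness
        if all(c % d for d in range(3, int(c ** 0.5) + 1, 2)):
            return c

def weak_keygen(seed_pool=500):
    """Hypothetical flawed generator: the first prime comes from an RNG
    seeded with very little entropy, so it repeats across 'devices'; more
    entropy arrives before the second prime, so the modulus is still unique."""
    rng = random.Random(random.randrange(seed_pool))  # low-entropy seed
    p = toy_prime(rng)
    rng.seed(random.getrandbits(64))                  # late-arriving entropy
    q = toy_prime(rng)
    return p, p * q

keys = [weak_keygen() for _ in range(1000)]
first_primes = Counter(p for p, _ in keys)
repeats = sum(c for c in first_primes.values() if c > 1)
print(f"{repeats} of 1000 moduli share their first prime with another key")
```

With only 500 possible seeds feeding the first prime, most of the 1000 toy keys end up sharing a factor with some other key, which is exactly the condition the GCD scan described next exploits.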
I don’t know if their analysis technique is novel, but it’s definitely
interesting. Instead of identifying some weakness in a specific key generation
library, or analyzing each key separately, the team analyzed pairs of keys to
see if they share a common factor. This is done by computing the GCD (Greatest Common Divisor)
of each pair of moduli. If a pair of keys doesn't share a prime factor, their GCD is 1; if it does, the GCD is the shared prime and both keys can be factored. By working with a very large collection of keys the researchers were able to
identify a large number of faulty ones.
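A minimal sketch of the idea, using a naive quadratic pairwise scan over toy moduli (the researchers needed an asymptotically faster batch computation to handle millions of keys, but the principle is the same):

```python
import math
from itertools import combinations

def find_shared_factors(moduli):
    """Naive pairwise GCD scan: any pair of RSA moduli with a GCD greater
    than 1 shares a prime, and both moduli can then be fully factored."""
    factored = {}
    for (i, n1), (j, n2) in combinations(enumerate(moduli), 2):
        g = math.gcd(n1, n2)
        if 1 < g < n1:                       # proper shared prime factor
            factored[i] = (g, n1 // g)
            factored[j] = (g, n2 // g)
    return factored

# Toy moduli (hypothetical small numbers, not real keys): the first and
# third share the prime 101, so the scan factors both of them.
keys = [101 * 103, 107 * 109, 101 * 113]
print(find_shared_factors(keys))             # {0: (101, 103), 2: (101, 113)}
```

A full-scale version avoids the quadratic pair loop by multiplying all the moduli together with a product tree and reducing each modulus against the product of all the others.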
So is this a big deal? Not as big as you might think from reading
the NY Times article.
Obligatory PhD comic
For cryptography professionals there’s nothing new here. All cryptographic schemes depend on certain assumptions. One of the assumptions required in pretty much all systems is that keys are generated randomly. If the source of randomness is flawed then the security of the system is reduced.
The fact that 0.2% of RSA keys can be compromised due to such a
flaw is interesting. But I have no doubt that many more systems using RSA keys can
be compromised due to other implementation flaws, as can systems using other digital
signature schemes.
The researchers advise using single-secret schemes such as DSA, to
which this attack does not apply. But the security of DSA depends on
generating a fresh random value (a nonce) for every signature, which is
much more difficult to achieve than the one-time randomness requirement of key
generation (ask Sony).
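To see why, here is the standard nonce-reuse break (textbook algebra, not anything from the paper): given two DSA signatures made with the same nonce k, and hence the same r, anyone can solve the signing equation s = k^-1 * (h + x*r) mod q first for k and then for the private key x. A minimal sketch, assuming Python 3.8+ for the modular inverse via pow:

```python
def recover_dsa_key(r, s1, h1, s2, h2, q):
    """Recover the private key x from two signatures (r, s1) and (r, s2)
    on hashes h1 and h2 that reused the same nonce k (hence the same r)."""
    k = (h1 - h2) * pow((s1 - s2) % q, -1, q) % q   # recover the nonce
    x = (s1 * k - h1) * pow(r, -1, q) % q           # recover the private key
    return x

# Toy check of the algebra with a small prime q and made-up values.
# (In real DSA, r = (g^k mod p) mod q; only the signing equation matters here.)
q = 10007
x, k, r = 1234, 5678, 4321                   # private key, reused nonce, shared r
h1, h2 = 1111, 2222                          # two message hashes
s1 = pow(k, -1, q) * (h1 + x * r) % q
s2 = pow(k, -1, q) * (h2 + x * r) % q
print(recover_dsa_key(r, s1, h1, s2, h2, q) == x)   # True
```

This is exactly the kind of failure behind the PS3 ECDSA break alluded to above.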
The funny thing is that a lot of the hype the paper received is due
to the fact that the researchers did only half of the job. It is quite likely
that the problematic keys they identified were generated by a specific software
library (possibly running in a specific environment), or perhaps by two such
libraries. Had the researchers completed the work and identified the source
of these keys, then instead of making all RSA keys suspect they would
simply have stated that RSA keys generated by software X under condition Y are
faulty. There go the headlines …
In fact another team that worked on the same problem in parallel seems to have done a better job of identifying the source of the flawed keys. One gets the feeling that the Lenstra team may have been aware of this other team and hurried to publish the paper prematurely.
So no, RSA is not compromised in any way. The community of security professionals who use RSA can stand up and shout:
“We are the 99.8%!” We check ourselves and ensure that our keys are truly
random, just like we check all the assumptions our security systems rely on.
This is yet another reminder that a little crypto knowledge is a
dangerous thing. It’s not enough to call OpenSSL to do your crypto for you. Security professionals are
aware of the assumptions on which these schemes rest and make sure to verify
them for their specific implementation. There are so many pitfalls on the way to implementing a good security scheme that non-professionals will invariably fall into at least one.
So for cryptographers this is not a big deal. But it could be an interesting opportunity for some hackers.
Not for attackers going after a specific target (the chances of this attack
being useful against any particular target are minuscule, and other
attacks are more likely to succeed), but for those who spread a wide
net to catch a few fish. Yes, there are probably easier ways to hack systems, but using a novel technique lets one work in a competition-free environment.