Tuesday, November 29, 2011

History of Security Failures 101: Crypto Flaws

While going over various examples of failed standardized security systems, I realized it would be very useful to have an open online database of security system failures that security professionals could use to analyze the root causes of such failures and learn how to prevent them in the future. I was thinking "Someone" should build such a database. Not knowing who this "Someone" could possibly be, I left it at that.

Thankfully, Matthew Green has prepared a short list covering a subset of such failures - specifically cryptography flaws that were exploited by non-government attackers.

The following is from Matthew Green's blog:

Monday, November 21, 2011

The Illinois Water Utility attack: Whodunit?

Schneier wonders what may have been the motive for the Illinois Water Utility attack. I don't usually go for conspiracy theories, but in this case my bet is on this hack being the work of a SCADA security provider.

Why do I think this was done by a security provider? Because no one else has the motive to perform such an attack and cause actual damage to the water utility.

Tuesday, November 15, 2011

Siri again

Some guys from the mobile app company Applidium have reverse engineered the protocol used by Siri to send requests to Apple's servers (e.g. a snippet of audio) and get a response (e.g. the text version of said audio snippet).

It seems Apple haven't done much to prevent other parties from using this service. Though the protocol includes a device ID, nothing in the protocol authenticates that the client is indeed the device it identifies as, so one can extract the device ID from an iPhone and use it on any other device.

I would expect Apple to update the Siri protocol to include some form of device authentication. Apple have several keys built into the iPhone they could use for this.

Updated November 17th to add:
Some of the reports on this "hack" are missing the point. For example, the Mocana blog headlines a report of this with "SSL Hack Leads To Siri Clones" and explains that this occurred because "Apple didn’t set its SSL connection correctly".

Though it is true that the Applidium researchers circumvented HTTPS encryption to analyze the protocol, the purpose of the HTTPS protocol isn't to prevent unauthorized access to web services, but to protect the privacy of people who use such web services. So in this case the purpose of HTTPS is to prevent attackers from gaining access to iPhone owners' data. The Applidium "hack", in which they added their own root certificate to their device, doesn't compromise this because only the iPhone owner can add such a certificate.

One might claim that Apple used HTTPS to keep the Siri protocol secret and thus prevent others from using the Siri servers. I don't think this is the case. Even if Apple had fully protected the communication with the servers, any competent hacker could have easily reverse engineered the protocol from Siri's code - so I don't think Apple could have possibly relied on the secrecy of the protocol.

If Apple want to prevent unauthorized devices from using the Siri servers, they really need the protocol to include some method for the servers to authenticate the device. This shouldn't be too difficult for Apple to do as they have keys built into the iPhone for such purposes.
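For illustration, here's a minimal sketch of what such a challenge-response scheme could look like, assuming a per-device secret key provisioned at manufacture. The names and flow are my own invention, not Apple's actual design:

```python
import hashlib
import hmac
import os

# Hypothetical challenge-response device authentication. The point is that
# the server verifies possession of a per-device secret key, so copying the
# (copyable) device ID to another device is no longer enough.

def server_challenge() -> bytes:
    # A fresh random challenge prevents replaying an old response.
    return os.urandom(16)

def client_response(device_key: bytes, challenge: bytes) -> bytes:
    # The device proves it holds the key without ever transmitting it.
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def server_verify(device_key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(device_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

device_key = os.urandom(32)  # provisioned into the device at manufacture
challenge = server_challenge()
assert server_verify(device_key, challenge, client_response(device_key, challenge))
```

An attacker who extracts only the device ID can't produce a valid response, so the stolen-ID trick Applidium described would no longer work.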

Saturday, November 12, 2011

The face that unlocked a thousand Androids

In the previous post I blamed Android's security woes on the device vendors' lack of motivation to make their devices secure despite Google's efforts to maintain security. If the latest reports of a hole in Android's face recognition mechanism are correct, this is a breach that falls squarely into Google's lap.

Android 4.0 (code named Ice Cream Sandwich) includes Face Unlock, a feature which, if activated, allows the owner to unlock a device just by presenting his or her face to the device's camera. Cool, right?

Malaysian cell phone blog SoyaCincau posted a video on YouTube which shows that it's possible to use a photograph of the owner's face to unlock the phone.


Wednesday, October 26, 2011

XML encryption standard hack

I started writing a post about the XML encryption hack, but in the course of researching the topic I came across this great post by Matthew Green on his "A Few Thoughts on Cryptographic Engineering" blog. Matthew's post covers pretty much everything I wanted to say and more - so I'll forgo my post and let you read all about it there.

Tuesday, October 25, 2011

Blogroll: Top security news sources

To the right (that's your right - my left) you'll see a new field - "Security RSS/Blog Roll". This is a list of security news sources I read (mostly via RSS).

Over the years I've looked for a good blogroll on security subjects that interest me. There is no lack of sources covering standard IT security subjects like network penetration and malware, but there are far fewer sources on designing secure systems and securing devices. Since I haven't managed to find such a blogroll anywhere, I've decided to create my own.

Following is a short description of these sources - the best I'm aware of.

Sunday, October 23, 2011

DuQu: A Malware Rashomon*

No self-respecting security blog can ignore [insert dramatic music] ...

DuQu: Son of Stuxnet

Friday, October 21, 2011

Doctor, is it Siri-ous?

Last week the web was full of reports of a "security hole" in Siri, Apple's new voice control mechanism for the iPhone. ZDNet went so far as to headline "Siri not so serious about security".

So what's the "security hole"? The default iPhone configuration is such that Siri is active even when the iPhone is locked with a passcode. This means that a person with access to your supposedly secure locked iPhone can, for example, send emails from your phone using Siri. It's not difficult to imagine the kinds of attacks this would allow someone impersonating you, not to mention the prank potential.

Tuesday, October 18, 2011

HTTPS: (Sub)Standard Security pt.3

In the first post in this series on the deficiencies of standardized security systems I promised a post on "X.509 certificates". By this I intended to discuss the commonly used system for authenticating and securing communications with web sites, widely known as SSL. As SSL (or to be precise, TLS) is just one component of this system (and is also used for other purposes) I will use the term "HTTPS system", though in fact the same system is used for more than just the HTTPS protocol.

German government Trojan and collateral damage

As you may have heard, the Chaos Computer Club (CCC) reverse engineered a Trojan written by the German government for the purpose of legal wiretapping.

Though the Trojan itself is legal in Germany (as long as it's only installed according to court orders), the CCC revealed a few embarrassing facts about it.

The list of issues is long but can be summarized by one point - the Trojan developers didn't make any significant effort to prevent other parties from utilizing the Trojan for their own purposes.

One of the reasons security systems fail is that their designers focused on a single adversary and didn't consider others. In this case it's likely that the designers of this Trojan were focused on ensuring that the Trojan's targets wouldn't identify and remove it. They didn't realize that their most formidable adversaries aren't the targets but members of the hacker community, who are happy for an opportunity to embarrass the government.

More importantly, since the Trojan developers' goal was to "attack" their targets, they didn't realize that they were still obligated to prevent undue damage to those same targets.

This isn't the first time a security system failed in such a way. Perhaps the most famous case is the Sony Copy Protection rootkit (Wikipedia), which some consider to have been, when revealed, the final nail in the coffin of copy protecting music CDs.

For a security system to succeed it must not cause undue damage. Anything but the tiniest amount of collateral damage is unacceptable and is likely to bring the downfall of the system.

Following the CCC's announcement, several anti-virus vendors have said that due to the collateral damage they will be treating this Trojan as malware. The German government developers will need to come up with something new - perhaps they should ask the CCC for some tips.

Tuesday, October 4, 2011

GSM A5/1: (Sub)Standard Security pt.2

GSM is a widely deployed standard for cellular communications, including their security aspects. This post will describe one aspect of the GSM security architecture, and how and why GSM security has been hacked.

The GSM standard deals with two main security concerns - payment and privacy. The first goal is to ensure that the person making a call pays for it. The second goal is to prevent unauthorized parties from accessing communications over the GSM network. This post will concentrate on the second area - privacy.
The initial GSM standard, published in 1990, stipulates the use of an algorithm called A5/1 for scrambling GSM voice communications. A5/1 has two important characteristics: it uses a 64-bit key and was intended to be kept secret.

Keeping an algorithm implemented by dozens of device manufacturers secret is good for as long as it lasts - which isn't very long. A5/1 remained secret for a few years, but it was eventually reverse engineered and published on the Internet in 1999.

Cryptanalysts found several weaknesses in the A5/1 algorithm - but none as significant as the fact that the algorithm uses a 64-bit key.

Using a 64-bit key to encrypt data is fine as long as one of the following conditions is true (a rough estimate of the search cost follows the list):
  1. You're living in the 20th century.
  2. You're living in the early 21st century, the data secured by any specific key is not very valuable, and there is no known plaintext encrypted under each key.
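Here's the promised back-of-the-envelope estimate. The key-testing rate is an assumption for illustration, not a measured figure:

```python
# Rough cost of exhaustively searching a 64-bit key space, assuming an
# attacker who can test 10**12 keys per second (an illustrative figure).
keyspace = 2 ** 64
rate = 10 ** 12                      # keys tested per second (assumed)
seconds = keyspace / rate
print(seconds / (3600 * 24 * 365))   # ~0.58 years; expected time is half that
```

And with a known plaintext encrypted under each key, an attacker can trade computation for storage in advance - which is exactly the precomputed-table approach that has been used against A5/1.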

Thursday, September 22, 2011

Two new side channels

Side channel is the security term for using side effects to glean information that someone is trying to hide.

We all use such side channels in our day to day life. We can tell someone is nervous from their body language. We might find a hiding person thanks to their protruding shadow.

Security systems generally rely on certain data, usually keys, being kept secret. Sometimes, though the key is stored securely, a side effect of the usage of the key can be used to reveal it. A classic example is the power analysis attack, which uses a device's power consumption while it performs cryptographic operations with a key to deduce the value of that key.
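To give a feel for how this works, here's a toy simulation: assume the power consumption leaks the Hamming weight of an intermediate value plus noise, and recover a key byte by correlating each key guess against the "measurements". Real attacks target nonlinear operations like the AES S-box; this stripped-down model only demonstrates the statistical principle:

```python
import numpy as np

rng = np.random.default_rng(1)
SECRET_KEY = 0x3A
N_TRACES = 2000

# Random known plaintext bytes, one per simulated power trace.
plaintexts = rng.integers(0, 256, N_TRACES)

def hamming_weight(x: int) -> int:
    return bin(x).count("1")

# Leakage model: power consumption is proportional to the Hamming weight
# of (plaintext XOR key), plus Gaussian measurement noise.
leak = np.array([hamming_weight(int(p) ^ SECRET_KEY) for p in plaintexts])
traces = leak + rng.normal(0.0, 1.0, N_TRACES)

# For each key guess, correlate the predicted leakage with the measured
# traces; the correct key produces the strongest correlation.
def score(guess: int) -> float:
    predicted = [hamming_weight(int(p) ^ guess) for p in plaintexts]
    return abs(np.corrcoef(predicted, traces)[0, 1])

best_guess = max(range(256), key=score)
print(hex(best_guess))  # prints 0x3a - the key falls out of the statistics
```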

Two novel side channel attacks have been announced recently. Neither of them is very practical - but they are quite interesting.

Schneier links to a paper that shows how a mobile device's motion sensors can be used to identify a password being typed into the device's touch screen. When you press any key on the device you move the device in a particular way that is unique to that key. So if you're running my app on your device, I can use the device's motion sensors to get the bank account password that you typed into another app. This attack isn't very practical yet (the information isn't accurate enough) but it's very cool.

The H cites another paper (in German) that describes how electrical power usage can be used to detect which program you're watching on TV. Previous papers showed how power usage can be used to glean information on a person's routine, but I believe this is the first time someone has used it to determine such details as which movie you've been watching. Scary.


Sunday, September 18, 2011

HDCP: Cool New Hack

In the previous post I mentioned the fact that the HDCP master root key was publicly revealed about a year ago. Last week Nate Lawson, on his root labs rdist blog, pointed out that the Chumby NeTV is probably the first commercial use of these leaked keys - and it's a very cool hack indeed.

The Chumby Wiki describes the NeTV as follows.
NeTV is designed to work as an add-on to video sources like Boxee, Revue, Roku, PS3, Xbox360, DVR, DVD, set top boxes, etc. It sits between these devices and the TV. NeTV's key benefit is adding push delivery of personalized internet news on top of all platforms in a non-intrusive and always-on manner.
When I first saw mention of the Chumby NeTV (on TechCrunch) I wondered how it could work with HDCP secured content, but I didn't take the time to consider the question in more depth. Nate did - and came back with a surprising answer.

According to Nate, the NeTV uses the HDCP master root key to derive the unique key sets of the two devices it's connected to (the video source and the television) and calculate the key used by the video source to encrypt the content. It then uses this key not to decrypt the content but to replace parts of the video images with its own (encrypted) overlay data.
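For the curious, here's a simplified model of the arithmetic that makes this possible. HDCP uses a Blom-style scheme: 40-bit KSVs with exactly 20 bits set, 40 private 56-bit keys per device, and a symmetric master matrix from which everything is derived. The matrix below is random, and the real protocol involves further steps this sketch omits:

```python
import random

random.seed(0)
MOD = 2 ** 56   # HDCP private keys are 56-bit values
N = 40          # each device has a 40-bit KSV and 40 private keys

# The leaked master secret is equivalent to a symmetric 40x40 matrix.
master = [[0] * N for _ in range(N)]
for i in range(N):
    for j in range(i, N):
        master[i][j] = master[j][i] = random.randrange(MOD)

def make_ksv():
    # A KSV is 40 bits, exactly 20 of them set.
    bits = [1] * 20 + [0] * 20
    random.shuffle(bits)
    return bits

def private_keys(ksv):
    # A device's 40 private keys are sums of master-matrix rows
    # selected by its own KSV bits.
    return [sum(master[i][j] for i in range(N) if ksv[i]) % MOD
            for j in range(N)]

def shared_key(own_keys, peer_ksv):
    # Each side sums its private keys at the positions of the peer's
    # KSV bits; the symmetry of the matrix makes the two sums equal.
    return sum(k for k, b in zip(own_keys, peer_ksv) if b) % MOD

source_ksv, tv_ksv = make_ksv(), make_ksv()
source_keys, tv_keys = private_keys(source_ksv), private_keys(tv_ksv)
assert shared_key(source_keys, tv_ksv) == shared_key(tv_keys, source_ksv)

# Anyone holding the master matrix can compute the same key from the two
# public KSVs alone - which is all the NeTV needs to build its overlay.
snooped = sum(master[i][j] for i in range(N) if source_ksv[i]
              for j in range(N) if tv_ksv[j]) % MOD
assert snooped == shared_key(source_keys, tv_ksv)
```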

At first glance you may wonder why the developers of the NeTV didn't simply generate their own unique key set (based on the master root key) and use that to decrypt the video from the video source - why go to the trouble of replacing parts of the encrypted video stream?

But by doing so the NeTV developers avoided two possible issues:
  1. If the NeTV were to decrypt the HDCP protected signal from the video source it may have been in violation of the DMCA. By not decrypting this signal Chumby reduced the risk of being sued.
  2. If the NeTV were to use its own key set, not generated by the HDCP licensing authority, the NeTV might have been taken out of service by some future countermeasure against illegal devices with unauthorized HDCP keys (e.g. through device revocation). Using the TV and video source's own keys prevents this.
So who is this ingenious Chumby NeTV developer? None other than Bunnie Huang of Hacking the Xbox fame. Once a master - always a master.



Thursday, September 15, 2011

HDCP: (Sub)Standard Security pt.1

I owe the readers of this blog an explanation (or two). I promised to explain "Why Security Systems Fail" and so far, after more than a month, there has been only one such post (on RSA SecurID).

To make up for this I'll do a series of posts on a group of security systems describing how and why they were breached. What these systems have in common is that they were each defined as a "standard" - i.e. a specification for the security system was published and was implemented by multiple parties. The first post in the series is dedicated to HDCP. Subsequent posts will cover GSM, X.509 certificates and others.

Wednesday, September 7, 2011

DigiNotar: When is a secure network not secure?

The Dutch government report (PDF) on the DigiNotar hack has confirmed what I suspected:
The separation of critical components was not functioning or was not in place. We have strong indications that the CA-servers, although physically very securely placed in a tempest proof environment, were accessible over the network from the management LAN.
These guys at DigiNotar are living in the nineties. These days the most important attack vector by far is through the network and not physical access. DigiNotar, like many others, invested more effort in defending against the less important attack.

But don't mock them. If you use a disk encryption technology like PointSec or PGP Disk and think it gives you any significant protection, you may be making the same mistake - assuming an attack involving physical access. It's quite likely hackers already have control of your computer even though it's physically in your possession. You should do what you can to prevent network-based attacks (firewall, anti-virus), but even then you must not assume you're 100% secure. If you have anything that is truly secret, just don't put it on a computer you connect to the Internet.

There's been a paradigm shift in the world of corporate security. Instead of traveling and trying to physically access the information of a single company, hackers can use technologies like Remote Access Trojans to attempt attacks on hundreds of companies from the comfort of their own home and with less risk of getting caught by law enforcement. Too many security teams, not just RSA and DigiNotar, haven't yet fully adjusted to this situation.

BTW, the full paragraph in the report begins with another sentence:
The most critical servers contain malicious software that can normally be detected by anti-virus software. The separation of critical components was not functioning or was not in place. We have strong indications that the CA-servers, although physically very securely placed in a tempest proof environment, were accessible over the network from the management LAN.
Which reminds me yet again of this XKCD classic:


Wednesday, August 31, 2011

DigiNotar: Intruder issued fake certificates

Dutch certificate authority DigiNotar revealed that the fake Google certificates signed by them were due to an intrusion into their system. They didn't give any details on how this was done.

I would assume the attacker didn't physically enter DigiNotar's facilities but instead accessed their network through the Internet. If so, this is yet another case of a security system being breached because the owner did not keep highly sensitive assets properly segregated from computers with access to the open internet.  RSA are not alone.

Or as Randall Munroe of XKCD puts it:

Tuesday, August 30, 2011

Security paradigm shift: Glitching the XBOX 360

In the mid-90s a new kind of attack hit the smart card industry - the glitch attack. Markus Kuhn and Ross Anderson described this attack in a paper from 1996:
Power and clock transients can also be used in some processors to affect the decoding and execution of individual instructions. Every transistor and its connection paths act like an RC element with a characteristic time delay; the maximum usable clock frequency of a processor is determined by the maximum delay among its elements. Similarly, every flip-flop has a characteristic time window (of a few picoseconds) during which it samples its input voltage and changes its output accordingly. This window can be anywhere inside the specified setup cycle of the flip-flop, but is quite fixed for an individual device at a given voltage and temperature.
So if we apply a clock glitch (a clock pulse much shorter than normal) or a power glitch (a rapid transient in supply voltage), this will affect only some transistors in the chip. By varying the parameters, the CPU can be made to execute a number of completely different wrong instructions, sometimes including instructions that are not even supported by the microcode. Although we do not know in advance which glitch will cause which wrong instruction in which chip, it can be fairly simple to conduct a systematic search.
Glitch attacks undermine what is perhaps the most common and fundamental assumption (law #1) of secure computing - that software will always run as it was coded. The breaking of this paradigm created a situation in the late 90s in which pretty much every smart card in the world could be hacked at a low cost. It took the smart card industry several years to design and deploy new hardware that was resistant to this kind of attack (solutions include various forms of input regulators, filters and detectors). Attempts to prevent this form of attack using special secure coding techniques proved ineffective - you can't really solve a security hole using software if you can't rely on the software running as coded.

Yesterday a hacker named gligli announced that he had performed such a glitch attack on the Xbox 360, the details of which he described on the Free60.org wiki:
We found that by sending a tiny reset pulse to the processor while it is slowed down does not reset it but instead changes the way the code runs, it seems it's very efficient at making bootloaders memcmp functions always return "no differences". memcmp is often used to check the next bootloader SHA hash against a stored one, allowing it to run if they are the same. So we can put a bootloader that would fail hash check in NAND, glitch the previous one and that bootloader will run, allowing almost any code to run.
As far as I know this is the first successful glitch attack performed on a consumer electronics device; the first, but not the last. Consumer electronics (CE) device security engineers have never made the paradigm shift to a world with glitch attacks. It is quite likely that a similar glitch attack can be performed on other CE devices, be they other game consoles, cell phones or tablets. This is really big news and it's surprising that it hasn't made more waves in the security blogosphere.
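To illustrate why a single glitched comparison is all it takes, here's a toy model of the boot check described above. The structure is my simplification, not actual Xbox 360 code:

```python
import hashlib

def boot_next_stage(image: bytes, stored_hash: bytes, fault: bool = False) -> bool:
    # The typical secure-boot pattern: one comparison gates everything.
    computed = hashlib.sha1(image).digest()
    matches = computed == stored_hash
    if fault:
        # Model of the glitch: for one instant the comparison result is
        # corrupted, as if memcmp had returned "no differences".
        matches = True
    return matches  # True means the image is allowed to run

good = b"signed bootloader"
evil = b"attacker bootloader"
stored = hashlib.sha1(good).digest()

print(boot_next_stage(evil, stored))              # False: normally rejected
print(boot_next_stage(evil, stored, fault=True))  # True: one glitch, game over
```

Redundant checks raise the bar somewhat, but as the smart card industry learned, you can't reliably patch this in software when you can't rely on the software running as coded.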

One of the reasons CE devices have been considered less susceptible to glitch attacks is that they run at a much higher clock frequency (many smart cards run at under 5 MHz while modern CE devices run at over 200 MHz). The Xbox 360 helpfully has a signal that allows hackers to reduce the clock frequency to 520 kHz - making glitch attacks much easier. If the Xbox security team had had more smart card design experience they probably wouldn't have made such a mistake.

Monday, August 29, 2011

Who owns your identity? The UDID in iOS 5

As you may have read, Apple has removed from iOS 5 the API that allows applications to access the UDID (Unique Device IDentifier). The UDID is a non-modifiable unique ID given to each iOS device and is used by many applications to identify a specific device.

Why is Apple preventing application developers from accessing the UDID? Two possible security reasons for this are to protect user privacy and to protect the secrecy of the UDID.

User privacy: Though the UDID identifies the device and not the user, many applications have access to both the device UDID and the user's identity. Such applications could be used to create a mapping of users to UDIDs, which could then be used to identify the users of applications that don't have access to the user's identity. For example, let's say a user has both a Facebook app and a porn app on their iOS device. Assuming the porn app doesn't require the user to enter any personal identifier, the user can assume that the distributors of the porn app don't know his identity. But since the Facebook app has access to both the UDID and the user's Facebook account, it is possible (for Facebook) to generate a mapping of UDIDs to Facebook accounts, and this could subsequently be used by the porn app distributor to identify users of the porn app.

UDID secrecy: There is reason to believe that some Apple applications use the UDID as a secret value. Apple cannot rely on other applications with access to the UDID to keep this value a secret.

A conspiracy theorist might think that in fact the reason Apple is doing this is to keep the UDID to itself. As TechCrunch notes:
If Apple does continue to use UDID for itself but denies it to developers that would be an “extremely lopsided change.” It would give Game Center and iAds yet one more advantage over competing third-party services.
In this case I'm voting with the conspiracists. If Apple wanted to protect the secrecy of the UDID or to protect user privacy, they could have easily replaced the current UDID API with a new API that would still give application developers a reliable unique identifier of the device but wouldn't compromise the secrecy of the UDID or the privacy of iOS users. Specifically, Apple could have done this by replacing the UDID API with one that returns a one-way hash of the UDID, keyed with an application-specific identifier (e.g. the App ID).
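Such an API could be as simple as the following sketch; the function name and key arrangement are mine, purely illustrative:

```python
import hashlib
import hmac

def per_app_device_id(udid: str, app_id: str) -> str:
    # Stable for a given (device, app) pair, but different apps receive
    # unlinkable identifiers and the raw UDID is never exposed.
    return hmac.new(udid.encode(), app_id.encode(), hashlib.sha256).hexdigest()

udid = "hypothetical-raw-udid"
print(per_app_device_id(udid, "com.example.social"))
print(per_app_device_id(udid, "com.example.other"))  # a different identifier
```

Each developer still gets a dependable per-device identifier, but no two apps can join their user databases on it.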

I'm sure the security engineers at Apple are smart enough to have thought of this. The fact that they didn't do so but simply removed the UDID API tells me that their goal was not to enhance security but to gain exclusivity on the device identity. Having such exclusive ownership of devices' identity gives Apple a great advantage in the development of identity-dependent applications including DRM.

Sometimes security is just an excuse for increasing control.

Thursday, August 25, 2011

CAPTCHAs and the Robot in the Middle attack

A CAPTCHA is a visual test of humanity used to prevent machines from performing an operation that is intended to be performed only by people. Many internet services use this to prevent mass automatic access to their services. For example, Google requires anyone registering a Gmail account to pass a CAPTCHA test.

The most commonly used CAPTCHA is a request to identify letters that are presented on the screen in a form which is difficult for OCR software to identify - see example on the right.

One way to circumvent CAPTCHAs is to use a Robot-in-the-Middle attack.

Thursday, August 18, 2011

Two bit attack reduces security effectiveness of AES by 70%!

Now how's that for a sensational headline? And it's true. A paper released today presents an attack that reduces the computational complexity of brute forcing an AES-128 key to 2 to the power of 126.1 - which means such an attack would take only about 30% of the time it would take to do the full 2 to the power of 128 exhaustive search. Similar reductions of about 2 bits are presented for AES-192 and AES-256.
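The 30% figure is just the ratio of the two work factors:

```python
# Ratio of the biclique attack's work to a full exhaustive search:
print(2 ** (126.1 - 128))   # ~0.27, i.e. roughly 30% of the brute-force time
```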

Of course this attack doesn't have any practical impact - such an attack is still completely infeasible - but (as The H writes) it's the first dent in the full AES after more than ten years of intensive cryptanalysis.

In American slang "two bit" means insignificant - so I guess one could call this a two-bit attack.

GPRS hacked: Who cares?

In case you weren't paying attention last week - Karsten Nohl and friends cracked the GPRS encryption scheme.

In this Forbes interview with Karsten, the interviewer tried to get an answer as to why the encryption scheme for GPRS was made weaker than that of the earlier GSM voice encryption scheme (A5/1 - which demanded much more effort to crack).

One point I didn't see mentioned is the fact that data communicated over GPRS can easily be encrypted at the application level, while voice is generally only secured at the GSM level. No serious security engineer would rely on the unknown proprietary GPRS encryption for securing sensitive data communications over GPRS when they can always add their own application-level encryption. If you know of a system that does rely on the GPRS encryption for such data - please leave a note in the comments below.
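For the record, adding such a layer takes only a few lines. Here's a sketch using the Python cryptography package's AES-GCM, assuming the two endpoints already share a key:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Application-level encryption: protect the payload before it ever touches
# the (untrusted) GPRS link. Assumes a pre-shared 128-bit key.
key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)
nonce = os.urandom(12)   # must be unique per message under the same key
ciphertext = aesgcm.encrypt(nonce, b"sensitive application data", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"sensitive application data"
```

With this in place, whatever the GPRS layer does (or fails to do) is irrelevant to the confidentiality of the payload.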

Blackhat US 2011: Impressions

I attended my first BlackHat conference a couple of weeks ago in Las Vegas. It was an interesting experience and I thought I’d share some of my thoughts.

Tuesday, August 16, 2011

The RSA SecurID debacle: Why it happened

The RSA SecurID saga was one of the more interesting security stories of 2011. Analyzing the background of this story can give some insight as to how security decisions are taken and why security systems fail.

The seven laws of security engineering

There are a few laws in the field of security engineering that impact many aspects of the discipline. Most of these laws are self-evident and well known, but applying them to real-world situations is difficult. In fact most security failures in the field can be traced to one or more of these laws.
Following is a list of seven such laws with a short description of each law. Future posts will elaborate on these laws (and others) as part of an analysis of specific cases.
You might ask a security engineer if a certain system is secure. If they give you an answer which sounds evasive and noncommittal, that’s good – otherwise they’re not telling you the whole truth.
Because the truth is that no system is 100% secure in and of itself. The most a security engineer can say is that under certain assumptions the system is secure.