While going over various examples of failed standardized security systems, I realized it would be very useful to have an open online database of security system failures, one that security professionals could use to analyze the root causes of such failures and learn how to prevent them in the future. I was thinking "Someone" should build such a database. Not knowing who this "Someone" could possibly be, I left it at that.
Thankfully, Matthew Green has prepared a short list covering a subset of such failures - specifically cryptography flaws that were exploited by non-government attackers.
Schneier wonders what may have been the motive for the Illinois Water Utility attack. I don't usually go for conspiracy theories, but in this case my bets are on this hack being the work of a SCADA security provider.
Why do I think this was done by a security provider? Because no one else has a motive to perform such an attack and cause actual damage to the water utility.
Some engineers from the mobile app company Applidium have reverse engineered the protocol Siri uses to send requests to Apple's servers (e.g. a snippet of audio) and get back a response (e.g. a text version of said audio snippet).
It seems Apple haven't done much to prevent other parties from using this service. Though the protocol includes a device ID, there is nothing in it to authenticate that the client is indeed the device it claims to be, so one can extract the device ID from an iPhone and use it on any other device.
I would expect Apple to update the Siri protocol to include some form of device authentication. Apple have several keys built into the iPhone that they could use for this.
Updated November 17th to add:
Some of the reports on this "hack" are missing the point. For example, the Mocana blog headlines a report of this with "SSL Hack Leads To Siri Clones" and explains that this occurred because "Apple didn’t set its SSL connection correctly".
Though it is true that the Applidium researchers circumvented HTTPS encryption to analyze the protocol, the purpose of HTTPS isn't to prevent unauthorized access to web services, but to protect the privacy of the people who use them. So in this case the purpose of HTTPS is to prevent attackers from gaining access to iPhone owners' data. The Applidium "hack", in which they added their own root certificate to their device, doesn't compromise this, because only the iPhone's owner can add such a certificate.
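To see why this isn't really an SSL flaw, here is a toy model (in Python, with made-up names; this is a sketch of the trust-store concept, not Apple's actual implementation) of how a TLS client decides which certificates to accept. Interception only works against a trust store that the device's owner has deliberately modified:

```python
# Toy model of a client's certificate trust store. Real TLS validation
# checks signature chains; here a "certificate" is just its issuing root.
trust_store = {"apple_root"}  # hypothetical factory-installed root

def verify(cert_issuer: str) -> bool:
    # A server certificate is accepted only if its root is trusted.
    return cert_issuer in trust_store

assert verify("apple_root")            # genuine server: accepted
assert not verify("applidium_root")    # interception proxy: rejected

# Only the device's owner can install an additional root certificate.
trust_store.add("applidium_root")
assert verify("applidium_root")        # now the owner's proxy can decrypt
```

The point the sketch makes: an attacker without control of the device cannot add a root, so HTTPS still protects other users' traffic; the researchers only "broke" the encryption protecting themselves.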
One might claim that Apple used HTTPS to keep the Siri protocol secret and thus prevent others from using the Siri servers. I don't think this is the case. Even if Apple had fully protected the communication with the servers, any competent hacker could have easily reverse engineered the protocol from Siri's code, so Apple couldn't possibly have relied on the secrecy of the protocol.
If Apple want to prevent unauthorized devices from using the Siri servers, they really need the protocol to include some method for the servers to authenticate the device. This shouldn't be too difficult for Apple to do as they have keys built into the iPhone for such purposes.
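One simple scheme along these lines is challenge-response authentication keyed on a per-device secret. The sketch below (Python, with hypothetical names; Apple's actual keys and protocol are not public) shows why a bare device ID is trivially spoofable while a keyed response is not, as long as the key can't be extracted:

```python
import hashlib
import hmac
import os
import secrets

# Hypothetical per-device secret, burned in at manufacture and known
# to the server; the bare device ID is public and trivially copied.
DEVICE_ID = "iphone-4s-1234"
DEVICE_KEY = secrets.token_bytes(32)  # server keeps a copy, indexed by ID

def respond(key: bytes, challenge: bytes) -> bytes:
    """Device proves possession of the key without revealing it."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

# Server side: issue a fresh random challenge, verify the response.
challenge = os.urandom(16)
genuine = respond(DEVICE_KEY, challenge)
assert hmac.compare_digest(genuine, respond(DEVICE_KEY, challenge))

# A cloner who copied only the device ID has no key and can't answer.
fake = respond(secrets.token_bytes(32), challenge)
assert not hmac.compare_digest(genuine, fake)
```

A fresh challenge per request also blocks replaying a previously observed response, which a static device ID cannot do.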
In the previous post I blamed Android's security woes on the device vendors' lack of motivation to make their devices secure despite Google's efforts to maintain security. If the latest reports of a hole in Android's face recognition mechanism are correct, this is a breach that falls squarely into Google's lap.
Android 4.0 (codenamed Ice Cream Sandwich) includes Face Unlock, a feature which, if activated, allows the owner to unlock a device just by presenting his or her face to the device's camera. Cool, right?
Malaysian cell phone blog SoyaCincau posted the following video on YouTube, which shows that it's possible to use a photograph of the owner's face to unlock the phone.