Tuesday, November 24, 2009

Not-a-Bot: Improving Service Availability in the Face of Botnet Attacks

Summary
This paper tackles the problem of separating user-generated activity (emails, web requests, and clicks) from the same activities performed by a bot infesting the computer. The authors leverage the Trusted Platform Module (TPM) to launch verified, trusted code on end hosts to carry out the operations their system needs.

The basic idea of the system is that actual human users can't generate all that many events over time. For example, it is very unlikely that a human will want to send more than one email per second (although emails with huge CC lists seem like a problem for the system). Building on this observation, Not-A-Bot (NAB) has software request an attestation (a token signed with a private key protected by the TPM) when it wants to perform an action such as sending an email. The attester checks whether an input-device event (keyboard or mouse) occurred within a threshold number of seconds (1 in the paper) and, if so, grants the attestation, which can be included in a header of the request. A verifier on the receiving end can use this attestation to be more confident that the message or request came from a human and not a bot.
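To make the mechanism concrete, here is a minimal sketch of the attester's decision rule. Everything in it is hypothetical naming on my part rather than the paper's code, and an HMAC over an in-memory key stands in for the signature that the real attester produces with a key protected by the TPM.

```python
import hashlib
import hmac
import time

# Hypothetical sketch of the attester's decision logic, not the paper's code.
# ATTESTER_KEY stands in for the signing key that the real attester keeps
# protected by the TPM; code in the untrusted guest OS never sees it.
ATTESTER_KEY = b"tpm-protected-key"
THRESHOLD_SECONDS = 1.0          # "recent human input" window (1 s in the paper)

last_input_time = 0.0            # updated by the trusted input-event hook

def record_input_event():
    """Called whenever a keyboard or mouse event reaches the attester."""
    global last_input_time
    last_input_time = time.time()

def request_attestation(message_bytes: bytes):
    """Return a signed attestation if a human acted recently, else None."""
    if time.time() - last_input_time > THRESHOLD_SECONDS:
        return None              # no recent human input: refuse to attest
    # Bind the attestation to this exact message so it cannot be replayed
    # for different content.
    digest = hashlib.sha256(message_bytes).hexdigest()
    tag = hmac.new(ATTESTER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"content_hash": digest, "signature": tag}
```

An application (say, a mail client) would call request_attestation(email_bytes) just before sending and attach the returned token in a header; if the call returns None, the mail simply goes out unattested.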

For added security, the content of the message is bound into the attestation, so a bot cannot reuse a single attestation to send multiple messages. Further, even if bots could simulate input actions, they could still only send at the reduced rate allowed by the threshold.
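Continuing the hypothetical names from the sketch above, the verifier's side of that content binding might look like this; because the content hash is covered by the signature, an attestation minted for one message fails for any other.

```python
import hashlib
import hmac

ATTESTER_KEY = b"tpm-protected-key"   # same stand-in key as in the sketch above

def verify_attestation(message_bytes: bytes, attestation: dict) -> bool:
    """Verifier-side check: the attestation must cover exactly this message."""
    expected_digest = hashlib.sha256(message_bytes).hexdigest()
    if attestation["content_hash"] != expected_digest:
        return False             # attestation was minted for different content
    expected_tag = hmac.new(ATTESTER_KEY, expected_digest.encode(),
                            hashlib.sha256).hexdigest()
    # The real system uses a public-key signature check against the attester's
    # certified key; a shared-key HMAC is used here only to keep the sketch
    # self-contained.
    return hmac.compare_digest(attestation["signature"], expected_tag)
```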

The authors implement NAB using the Xen hypervisor to prevent the untrusted OS from reaching into the attester's address space. They strip down Xen to reduce the size of the codebase that needs to be trusted. The paper also covers (in somewhat laborious detail) how the TPM is used to verify the attester's code at launch and to make its private key available only to that verified code.
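For readers who haven't seen measured launch before, the following is a conceptual illustration of the general TPM idea (extend-only platform configuration registers), not the paper's exact protocol or real TPM commands: the final register value commits to every piece of code loaded during boot, so a verifier can recognize a known-good hypervisor-plus-attester stack.

```python
import hashlib

def pcr_extend(pcr_value: bytes, measurement: bytes) -> bytes:
    # A PCR can only be extended (hashed together with new input), never set
    # directly, so its final value depends on everything measured into it.
    return hashlib.sha256(pcr_value + measurement).digest()

def measured_launch(hypervisor_code: bytes, attester_code: bytes) -> bytes:
    pcr = b"\x00" * 32                       # PCR starts at a known value
    pcr = pcr_extend(pcr, hashlib.sha256(hypervisor_code).digest())
    pcr = pcr_extend(pcr, hashlib.sha256(attester_code).digest())
    return pcr                               # reported to verifiers in a TPM quote

# A remote party trusts attestations only if the reported value matches the
# hash chain of known-good hypervisor and attester binaries.
KNOWN_GOOD = measured_launch(b"trusted-xen-build", b"trusted-attester-build")
```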

To evaluate their system, the authors collected activity traces from a number of users at Intel and overlaid honeypot-collected spam data. They show that NAB can suppress 92% of spam email, 89% of bot-generated DDoS requests, and 87% of fraudulent click-throughs. These results seem to assume verifiers running everywhere.

Comments
I thought this paper took an interesting approach to solving the bot problem, but in the end I remain somewhat unconvinced by their methodology.

Firstly, even if bots could only send at (say) one email per second, there would still be a huge amount of bot-generated email flying around, given the sheer number of computers infected by bots.
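A quick back-of-envelope calculation illustrates this; the botnet size and the daily window of human keyboard activity are my own assumptions, not figures from the paper.

```python
# Rough scale of attested bot email under NAB's rate limit (assumed numbers).
bots = 1_000_000                       # assumed number of infected machines
active_seconds_per_day = 2 * 60 * 60   # assume 2 hours of human input per host
emails_per_day = bots * active_seconds_per_day  # one attested email per second
print(f"{emails_per_day:,} bot emails per day")  # 7,200,000,000
```

Even with the attester's rate limit and only a couple of hours of piggybacked human activity per machine, a million-node botnet could still attest billions of messages a day.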

Secondly, incremental deployment of this system seems difficult. If only a few end hosts are using NAB, then all traffic needs to be treated as 'normal' anyway, since biasing against unsigned requests would certainly have very high false-positive rates. And if few servers run verifiers, there is little to be gained by running NAB on an end host and paying the extra cost.
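One way a verifier might cope with partial deployment, sketched below as my own framing rather than something the paper necessarily specifies, is to treat a valid attestation purely as a positive signal, so hosts that don't run NAB are never penalized outright.

```python
def score_request(has_valid_attestation: bool, base_spam_score: float) -> float:
    """Lower score = delivered / served first. Hypothetical policy sketch."""
    if has_valid_attestation:
        return base_spam_score - 1.0   # attested traffic gets a priority boost
    return base_spam_score             # unattested traffic is handled as today
```

This avoids the false-positive problem during rollout, but it also weakens the incentive story: until many servers reward attestations, end hosts gain little from installing NAB.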

Finally, the whole architecture seems a bit hard to swallow. It seems a bit much to ask users to run a hypervisor under their OS for this purpose, and without the hypervisor the system is open to attack. The attester's code is measured at launch, before the host OS boots, but if the host OS can get at the attester's address space, the attester could easily be replaced with something that always grants attestations, making the system useless.

This wasn't a bad paper, but given that the system seems a bit weak and that the paper lacks a serious networking component, I would probably remove it from the syllabus.
