On 20 March 2015 at 05:45, Rishab Nithyanand <rishabn....@gmail.com> wrote:
> Hey all,
>
> I just thought I'd share and get feedback about some recent work from our
> team at Stony Brook University.
Interesting, thanks! I do question one of the early assumptions, though:

"Many games also include the notion of private games between a limited
number of players which may only be accessed using a password. This means
that, even a highly motivated adversary (e.g., one who is willing to run a
game client themselves) still cannot observe the game state."

That seems to rest on a risky assumption: chiefly, that the only possible
attack is via an external game client. That may be mistaken. An adversary
has many other avenues: attacking or subverting the game client software
itself, attacking the game network, attacking the operator of the game
(e.g. Blizzard, in the case of WoW), and so on.

We shouldn't be surprised to find the likes of the NSA attacking gaming
communities, because they are large communities, often overly trusting of
their environment (notably the client software), and frequently built with
central control. For example:

http://www.propublica.org/documents/item/889134-games

You could mitigate some of that, sure. You could choose a less popular game
(i.e. one less likely to be targeted) with open source client and server
software (though you'd have to review it too, which is probably beyond the
skill of most users) that operates in an encrypted, peer-to-peer fashion.
And you can use behavioural steganography, as your paper describes. Keep
raising the bar, I guess.

But a lot of that sounds like security by obscurity, and a skilled
adversary should be able to attack it. Any opsec leak, and that castle
would fall fairly fast, I suspect.

Still, fun research. Literally :)

-J
--
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk