Hello misc@, I'm researching locking things down, and I'm wondering what the current best practice is for isolating risky programs. It seems this community has traditionally shunned virtualization as a solution, and has also called chrooting alone "insufficient". Okay, sure.
But what is better, then? Say, for example, I'm running firefox and I don't trust it. Installed straight from pkg_add, it doesn't run as its own user:

  $ ps -o user,command | grep firefox
  jpouellet firefox

As I understand it, the next time a remote code execution vulnerability comes along, it could, among many, many other things, read my ~/.ssh/id_rsa, and then it's game over.

A chroot, or even just a separate user, would seem to fix that particular problem, assuming the process couldn't easily break out (probably not a safe assumption). But that still leaves many other issues; for example, it would still be able to send network traffic originating from my machine, which would be extremely valuable to an attacker.

The historical solution (as of 2005) [1] to this seems to have been systrace. But then vulnerabilities were found in it (in 2007) [2].

So, unless I'm missing something, it seems that virtualization remains the most wholesome solution, but if that's broken, then we're back at square one! What do you guys recommend? Should I just chroot a VM whose network traffic all goes through a local filter, and hope for the best? I'm really at a loss for what to do here.

Many thanks,
Jean-Philippe

[1] http://marc.info/?l=openbsd-misc&m=113459984810732&w=2
[2] http://www.watson.org/~robert/2007woot/2007usenixwoot-exploitingconcurrency.pdf
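P.S. To make the "local filter" idea concrete, here's roughly the kind of pf rule set I have in mind, assuming firefox runs as a dedicated user. The _firefox user and the proxy port 8118 are made up for the sake of the example; I'm not claiming this is complete or correct, it's just a sketch of the intent:

  # /etc/pf.conf fragment (hypothetical): the dedicated browser user
  # may only talk to a local filtering proxy on 127.0.0.1:8118.
  # "quick" means first match wins, so the pass rule must come first.
  pass out quick on lo0 inet proto tcp to 127.0.0.1 port 8118 user _firefox
  block return out quick proto { tcp, udp } user _firefox

Does something like that, combined with running firefox as its own user, actually buy meaningful containment, or am I fooling myself?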

