On 8 April 2016 at 16:18, Jon Ribbens <jon+python-...@unequivocal.co.uk> wrote:
> I've made another attempt at Python sandboxing, which does something
> which I've not seen tried before - using the 'ast' module to do static
> analysis of the untrusted code before it's executed, to prevent most
> of the sneaky tricks that have been used to break out of past attempts
> at sandboxes.
>
> In short, I'm turning Python's usual "gentleman's agreement" that you
> should not access names and attributes that are indicated as private
> by starting with an underscore into a rigidly enforced rule: try to
> access anything starting with an underscore and your code will not be
> run.
>
> Anyway, the code is at https://github.com/jribbens/unsafe
> It requires Python 3.4 or later (it could probably be made to work on
> Python 2.7 as well, but it would need some changes).
>
> I would be very interested to see if anyone can manage to break it.
> Bugs which are trivially fixable are of course welcomed, but the real
> question is: is this approach basically sound, or is it fundamentally
> unworkable?

If I'm not mistaken, this breaks out:

    exec('open("out", "w").write("a")', {})

because if the second argument to exec() does not contain a "__builtins__" key, a reference to the original builtins module is inserted automatically: https://docs.python.org/3/library/functions.html#exec
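For illustration, here is a minimal sketch of the documented behaviour that makes this work (this is my own demonstration, not code from the unsafe repository; the file name "out" is just an example):

    # When the globals mapping passed to exec() has no '__builtins__' key,
    # CPython inserts a reference to the builtins module automatically.
    g = {}
    exec("", g)
    print("__builtins__" in g)  # prints: True

    # So even if a sandbox strips builtins from its own namespace,
    # untrusted code can regain them (including open()) via a fresh dict:
    exec('open("out", "w").write("a")', {})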
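For context, the kind of ast-based screening described in the quoted mail might look roughly like this (my own sketch of the idea, not the actual code from the repository):

    import ast

    def screen(source):
        """Refuse to run source that touches any name or attribute
        beginning with an underscore (a rough sketch of the idea)."""
        tree = ast.parse(source)
        for node in ast.walk(tree):
            if isinstance(node, ast.Attribute) and node.attr.startswith("_"):
                raise ValueError("private attribute access: " + node.attr)
            if isinstance(node, ast.Name) and node.id.startswith("_"):
                raise ValueError("private name access: " + node.id)
        return source

Note that the break-out above contains no underscores at all, so a check like this never fires on it.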