Robert O'Callahan wrote:
> I also think that sandboxing the engine is not interesting. Assuming
> you're talking about OS-level process sandboxing, there's no risk
> there; we know browser engines can be sandboxed that way.

Sandboxing affects the design and implementation of networking, iframes, form 
submission, cross-window communication (window.location.href, window.history.*, 
window.postMessage(), etc.), navigation (e.g. clicking a link to navigate to 
another page), input fields, and graphics (especially things that use OpenGL 
and equivalents).

Also, I don't know whether Servo's rendering engine already takes "foreign" 
objects into consideration, but it is good for it to know that it will have to 
deal with things it cannot draw itself. (Again, it depends on the sandboxing 
model.)

The current way Google and some others do sandboxing is "process per tab," but 
ultimately "tab" is the wrong boundary for isolation. For example, if I am at 
attacker.com and then go to mybank.com in the same tab (perhaps by simply 
clicking a link), the sandbox is not as useful as we'd like if it results in 
mybank.com being loaded into the process that attacker.com booby-trapped.
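To make the distinction concrete, here is a toy Rust sketch (all names are 
hypothetical, not Servo APIs) of choosing the renderer process by *site* 
rather than by tab: navigating a tab from attacker.com to mybank.com would 
then land in a different process. The "registrable domain" computation here 
is deliberately naive (last two host labels); a real browser would consult 
the Public Suffix List.

```rust
use std::collections::HashMap;

// Naive site key: scheme + "registrable domain" (last two host labels).
// Real code would use a URL parser and the Public Suffix List.
fn site_key(url: &str) -> String {
    let (scheme, rest) = url.split_once("://").unwrap_or(("", url));
    let host = rest.split('/').next().unwrap_or("");
    let labels: Vec<&str> = host.split('.').collect();
    let domain = if labels.len() >= 2 {
        labels[labels.len() - 2..].join(".")
    } else {
        host.to_string()
    };
    format!("{}://{}", scheme, domain)
}

struct ProcessPool {
    by_site: HashMap<String, u32>, // site key -> renderer process id
    next_id: u32,
}

impl ProcessPool {
    fn new() -> Self {
        ProcessPool { by_site: HashMap::new(), next_id: 0 }
    }

    // Reuse a renderer only for the same site; otherwise spawn a new one
    // (which is why process startup needs to be cheap).
    fn process_for(&mut self, url: &str) -> u32 {
        let key = site_key(url);
        if let Some(&pid) = self.by_site.get(&key) {
            return pid;
        }
        self.next_id += 1;
        self.by_site.insert(key, self.next_id);
        self.next_id
    }
}

fn main() {
    let mut pool = ProcessPool::new();
    let evil = pool.process_for("https://attacker.com/evil");
    let bank = pool.process_for("https://mybank.com/login");
    // Same tab or not, these sites never share a process.
    assert_ne!(evil, bank);
    // Subdomains of the same site do share one.
    assert_eq!(evil, pool.process_for("https://www.attacker.com/other"));
}
```

Whether subdomains should share a process (as above) is itself a policy 
choice; the point is only that the key is derived from the content's origin, 
not from which tab it happens to be displayed in.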

Given that networking and link navigation are so fundamental, I think it is 
useful to plan ahead at least a little bit. Sandboxing cannot effectively 
enforce the same-origin policy if networking lives in the child (rendering) 
processes, for example. In general, a well-sandboxed browser would be designed 
so that process startup, teardown, and backgrounding are very cheap, with less 
emphasis on in-process caches of various kinds. In a browser optimized for not 
having a sandbox, it would be more the opposite.
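A minimal sketch of why networking belongs in the trusted parent: if the 
parent brokers every fetch, it can check each request against the origin it 
recorded when it created the renderer, so even a fully compromised renderer 
cannot use ambient authority to read cross-origin data. The types and policy 
below are hypothetical and simplified (same-origin only; a real broker would 
also admit CORS-approved requests).

```rust
// Naive origin extraction: scheme + host. Real code would use a URL parser.
fn origin_of(url: &str) -> String {
    let (scheme, rest) = url.split_once("://").unwrap_or(("", url));
    let host = rest.split('/').next().unwrap_or("");
    format!("{}://{}", scheme, host)
}

// Lives in the trusted parent process. The renderer can only *ask*;
// the parent recorded `renderer_origin` at renderer creation time,
// so a compromised renderer cannot lie about it.
struct NetworkBroker;

impl NetworkBroker {
    fn allow_fetch(&self, renderer_origin: &str, url: &str) -> bool {
        // Same-origin only, for simplicity.
        origin_of(url) == renderer_origin
    }
}

fn main() {
    let broker = NetworkBroker;
    assert!(broker.allow_fetch("https://mybank.com", "https://mybank.com/account"));
    // A renderer booby-trapped by attacker.com gets refused.
    assert!(!broker.allow_fetch("https://attacker.com", "https://mybank.com/account"));
}
```

If networking instead lived inside each sandboxed renderer, the check above 
would be running inside the very process an attacker controls, and the 
sandbox would add nothing to same-origin enforcement.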

[I read all the previous threads on sandboxing on dev-servo I could find.]

Cheers,
Brian
_______________________________________________
dev-servo mailing list
dev-servo@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-servo
