On Wed, Aug 3, 2016 at 9:22 AM, Till Schneidereit <t...@tillschneidereit.net> wrote:
> On Wed, Aug 3, 2016 at 5:35 PM, Jack Moffitt <j...@metajack.im> wrote:
>
>> I asked ekr how much this mattered, and he thought it was important. I
>> don't think anyone has pointed me to a documented attack, but it
>> definitely seems like the kind of thing that could be done somehow.
>
> I guess I left out an important point: multiple content processes only
> improve security if we can reliably ensure that attacking code is never
> run in the same process as potential target content.
>
> That means either really spawning a new process for every origin, at
> least, or (for some value of "reliably ensure") separating content based
> on some kind of trustworthiness score.
>
> I would argue that the first isn't really feasible. I think (but might be
> mistaken) that all browsers start combining tabs past a certain number so
> as not to gobble up too much memory. In the second case, we might just as
> well use a single process for each trustworthiness group right away.

Maybe I'm misunderstanding your point, but it seems like as soon as we
agree that we might have >1 content process, we're largely just talking
about the algorithm we use for assignment, and I can think of a number of
possible algorithms that might make sense (for instance, if you have a
site that's in the browser HPKP list, you group all of its co-domains into
a single process). This would (for instance) protect Google apps from
everyone else.

-Ekr

>> How we allocate domains to content processes is an open question. It's
>> not clear whether we want to segregate high-value targets or low-value
>> targets. But the infrastructure required is pretty much the same either
>> way. The only strategy we know won't work is round-robin/random, since
>> the attacker could just keep creating domains until they land in the
>> right process.
>>
>> To be clear, I don't think there is very much code complexity here over
>> the normal 2-process (chrome + content) solution. We already have to
>> have process spawning and IPC. The only thing that changes here is the
>> code that decides where to spawn new pipelines.
>
> I'm not concerned about code complexity, but about memory usage. Memory
> usage in many-tab scenarios is one of the measures where Firefox is still
> vastly superior to the competition, and I think we should aim to roughly
> match that.
>
>> Implementation-wise, we currently spawn a new process per script
>> thread. I think we should change this to spawn a single, sandboxed
>> content process that contains all the pipelines. Later we can expand
>> this once it's clearer how we should allocate pipelines to processes.
>>
>> jack.
>>
>> On Wed, Aug 3, 2016 at 2:53 AM, Till Schneidereit
>> <t...@tillschneidereit.net> wrote:
>> > I wonder to what extent this matters. I'm not aware of any real-world
>> > instances of the mythical cross-tab information-harvesting attack.
>> > Sure, in theory the malvertising ad in one tab would be able to read
>> > information from your online banking session. In practice, it seems
>> > like attacks that gain control of the machine are so much more
>> > powerful that that's where all the focus is.
>> >
>> > Additionally, it seems like two content processes, one for normal
>> > sites, one for high-security ones (perhaps based on EV certificates),
>> > should give much of the benefit. Or perhaps an additional one for
>> > low-security ones such as ads (perhaps based on tracking-blocking
>> > lists).
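To make the assignment question concrete, here is a minimal Rust sketch of
what such a policy could look like. Everything in it is hypothetical (the
ProcessGroup/AssignmentPolicy names, the example hosts); it is not Servo
code. It just encodes the ideas above: pinned (HPKP-style) sites keep all
their co-domains in one dedicated process, EV-style high-security sites
share a process, known ad/tracking origins share a low-trust process, and
everything else falls into a default bucket. Round-robin is deliberately
absent, since an attacker can keep minting domains until one lands next to
its target.

// Hypothetical sketch of a pipeline-to-process assignment policy; these
// are not Servo's actual types. Origins that map to the same ProcessGroup
// would share one content process.
use std::collections::{HashMap, HashSet};

#[derive(Clone, Debug, Hash, PartialEq, Eq)]
enum ProcessGroup {
    /// All co-domains of a pinned (e.g. HPKP-listed) site share one process.
    Pinned(String),
    /// High-security sites (e.g. EV certificates) share a process.
    HighSecurity,
    /// Known ad/tracking origins share a low-trust process.
    LowTrust,
    /// Everything else.
    Default,
}

struct AssignmentPolicy {
    /// Maps a co-domain to the pinned group it belongs to.
    pinned_groups: HashMap<String, String>,
    /// Origins on a tracking/ad block list.
    low_trust: HashSet<String>,
}

impl AssignmentPolicy {
    fn group_for(&self, host: &str, has_ev_cert: bool) -> ProcessGroup {
        if let Some(group) = self.pinned_groups.get(host) {
            return ProcessGroup::Pinned(group.clone());
        }
        if self.low_trust.contains(host) {
            return ProcessGroup::LowTrust;
        }
        if has_ev_cert {
            return ProcessGroup::HighSecurity;
        }
        ProcessGroup::Default
    }
}

fn main() {
    let mut pinned_groups = HashMap::new();
    pinned_groups.insert("mail.google.com".to_string(), "google".to_string());
    pinned_groups.insert("accounts.google.com".to_string(), "google".to_string());

    let mut low_trust = HashSet::new();
    low_trust.insert("ads.example.net".to_string());

    let policy = AssignmentPolicy { pinned_groups, low_trust };

    // Both Google origins land in the same pinned group; the ad origin never does.
    println!("{:?}", policy.group_for("mail.google.com", true));
    println!("{:?}", policy.group_for("accounts.google.com", true));
    println!("{:?}", policy.group_for("ads.example.net", false));
    println!("{:?}", policy.group_for("example.org", false));
}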
>> >
>> > On Wed, Aug 3, 2016 at 5:43 AM, Jack Moffitt <j...@metajack.im> wrote:
>> >
>> >> Each process is a sandboxing boundary. Without security as a concern
>> >> you would just have a single process. A huge next step is to have a
>> >> second process that all script/layout threads go into. This however
>> >> still leaves a bit of attack surface for one script task to attack
>> >> another. How many processes you want is a tradeoff of overhead vs.
>> >> security.
>> >>
>> >> So really it should say "more process more security".
>> >>
>> >> jack.
>> >>
>> >> On Tue, Aug 2, 2016 at 9:09 PM, Patrick Walton <pwal...@mozilla.com>
>> >> wrote:
>> >> > It's not a stupid question :) I actually think we should gather all
>> >> > script and layout threads together into one process. Maybe two, one
>> >> > for high-security sites and one for all other sites.
>> >> >
>> >> > Patrick
>> >> >
>> >> > On Aug 2, 2016 6:47 PM, "Paul Rouget" <p...@mozilla.com> wrote:
>> >> >>
>> >> >> On Tue, Aug 2, 2016 at 6:47 PM, Jack Moffitt <j...@metajack.im>
>> >> >> wrote:
>> >> >> >> First, is multiprocess and sandboxing actively supported?
>> >> >> >
>> >> >> > I tested this right before the nightly release, and it was
>> >> >> > working fine and didn't seem to have bad performance. Note that
>> >> >> > you can run -M, or -M and -S, but not -S by itself (which
>> >> >> > doesn't make sense). Also note that -M and -S probably don't
>> >> >> > work on Windows or Android currently.
>> >> >> >
>> >> >> >> Is Servo tested with the "-M -S" options?
>> >> >> >
>> >> >> > We do not have automated testing of these yet.
>> >> >> >
>> >> >> >> What's the status of the sandbox?
>> >> >> >
>> >> >> > Should work on Mac and Linux, but hasn't been audited.
>> >> >> >
>> >> >> >> Are there any reasons for these options not to be turned on by
>> >> >> >> default?
>> >> >> >
>> >> >> > They should be, although I think we wanted to fix perf issues
>> >> >> > running the WPT suite and get all the platforms working first.
>> >> >> > We should probably test both configurations.
>> >> >> >
>> >> >> >> Do we want to enable "-M -S" for browserhtml? Would that help?
>> >> >> >
>> >> >> > I wanted to have this for the nightly, but didn't have time to
>> >> >> > test. If it works and has decent performance we can switch to
>> >> >> > having these be on.
>> >> >> >
>> >> >> >> I'd like to understand what is not part of the sandboxed
>> >> >> >> content process. I guess compositor code and anything GPU and
>> >> >> >> window related is not sandboxed, so it runs in the main
>> >> >> >> process. How does a sync call to localStorage work in a
>> >> >> >> sandboxed process? Where is networking code executed?
>> >> >> >
>> >> >> > The things that live in the extra processes (which are
>> >> >> > sandboxed) are the script and layout threads. Right now each
>> >> >> > script/layout thread gets its own process (and I think any
>> >> >> > pipeline which shares the same script thread).
>> >> >> >
>> >> >> > Eventually we'll want to have each extra process contain some
>> >> >> > number of pipelines. So that is script+layout, but for arbitrary
>> >> >> > numbers of domains.
>> >> >>
>> >> >> In your slides, you say "more process more better".
>> >> >> That might be a stupid question, but why?
>> >> >> Because of the nature of Servo, can't we just gather all the
>> >> >> script+layout threads into one single sandboxed process?
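For concreteness, here is a toy Rust sketch of the two spawning strategies
being compared: the current behaviour of one sandboxed process per
script/layout pair, versus the single sandboxed content process holding
every pipeline that Jack proposes above and Paul is asking about. The
names (ProcessAllocator, ProcessModel, PipelineId) are made up for
illustration; the real constellation spawns OS processes and talks to them
over IPC rather than just recording assignments in a map.

// Toy model of the process-spawning decision; not Servo's actual API.
use std::collections::HashMap;

#[derive(Clone, Copy, Debug, Hash, PartialEq, Eq)]
struct PipelineId(u32);

#[derive(Clone, Copy, Debug, Hash, PartialEq, Eq)]
struct ContentProcessId(u32);

enum ProcessModel {
    /// Current behaviour per the thread: a fresh sandboxed process per pipeline.
    PerPipeline,
    /// Proposed next step: every pipeline shares one sandboxed content process.
    SingleContentProcess,
}

struct ProcessAllocator {
    model: ProcessModel,
    next_id: u32,
    shared: Option<ContentProcessId>,
    /// Which sandboxed process each pipeline's script+layout threads run in.
    assignments: HashMap<PipelineId, ContentProcessId>,
}

impl ProcessAllocator {
    fn new(model: ProcessModel) -> Self {
        ProcessAllocator { model, next_id: 0, shared: None, assignments: HashMap::new() }
    }

    /// Decide which content process a new pipeline should be placed in.
    fn place(&mut self, pipeline: PipelineId) -> ContentProcessId {
        let process = match self.model {
            ProcessModel::PerPipeline => self.fresh_process(),
            ProcessModel::SingleContentProcess => match self.shared {
                Some(p) => p,
                None => {
                    // Lazily create the one shared content process.
                    let p = self.fresh_process();
                    self.shared = Some(p);
                    p
                }
            },
        };
        self.assignments.insert(pipeline, process);
        process
    }

    fn fresh_process(&mut self) -> ContentProcessId {
        let id = ContentProcessId(self.next_id);
        self.next_id += 1;
        id
    }
}

fn main() {
    let mut per_pipeline = ProcessAllocator::new(ProcessModel::PerPipeline);
    let mut single = ProcessAllocator::new(ProcessModel::SingleContentProcess);
    for n in 0..3 {
        println!(
            "pipeline {}: per-pipeline -> {:?}, single -> {:?}",
            n,
            per_pipeline.place(PipelineId(n)),
            single.place(PipelineId(n))
        );
    }
    println!("single-process assignments: {:?}", single.assignments);
}

Switching between the two models then becomes a question of which policy
the constellation is constructed with, and a smarter per-domain policy
like the one sketched earlier would plug in at the same point.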
>> >> >> > The constellation, networking, graphics, etc. all live in the
>> >> >> > root process, which has privileges.
>> >> >> >
>> >> >> >> I'm trying to understand the relation between a constellation,
>> >> >> >> iframes and a sandboxed process. I would naively expect to have
>> >> >> >> one process per constellation, but apparently it's one process
>> >> >> >> per iframe. If I'm not mistaken, today in browserhtml we have
>> >> >> >> only one constellation. I imagine in the future there would be
>> >> >> >> one sandboxed process per constellation, one constellation per
>> >> >> >> group of tabs of the same domain, and one constellation for
>> >> >> >> browserhtml.
>> >> >> >
>> >> >> > There is only one constellation. A constellation owns a set of
>> >> >> > pipelines which then form a tree of pipelines. It is only these
>> >> >> > pipelines that live outside the main process.
>> >> >>
>> >> >> Would there be any advantage to having one constellation per tab?
>> >> >> Can't a constellation fail? Would it be more robust to have
>> >> >> multiple constellations?
>> >> >>
>> >> >> I've read somewhere that a constellation should be seen as the set
>> >> >> of pipelines per tab.
>> >> >>
>> >> >> But maybe it's a different story with browserhtml, because what
>> >> >> would hold the tabs/constellations would be a pipeline, so in the
>> >> >> end it just doesn't make sense to have multiple constellations.
>> >> >>
>> >> >> Asking because if multiple constellations are better and that's
>> >> >> what we eventually want to do, we need to rethink the bhtml
>> >> >> architecture.
>> >> >>
>> >> >> > Eventually we'll probably experiment with where resource-caching
>> >> >> > threads and such go.
>> >> >> >
>> >> >> > Here's a link to the deck I presented in London, which has
>> >> >> > pretty pictures of what the design should be:
>> >> >> >
>> >> >> > https://docs.google.com/presentation/d/1ht96DBAynx7dbL2taDAzNHs78QWeKvyzrVV1O-cDQLQ/edit?usp=sharing
>> >> >> >
>> >> >> > jack.
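As a rough illustration of the structure Jack describes above (a single
constellation in the privileged root process owning a tree of pipelines,
one per top-level page or iframe, with only the pipelines' script and
layout work running in sandboxed content processes), here is a toy Rust
sketch. The types are invented for this example and are not Servo's actual
Constellation or Pipeline definitions; putting browserhtml at the root with
tabs as nested iframes mirrors the browserhtml setup discussed above, not a
general requirement.

// Illustrative types only; not Servo's actual Constellation/Pipeline.
#[derive(Clone, Copy, Debug)]
struct PipelineId(u32);

/// One document: a script thread + layout thread pair for a single frame.
struct Pipeline {
    id: PipelineId,
    url: String,
    /// Nested iframes get pipelines of their own, forming a tree.
    children: Vec<Pipeline>,
}

/// Lives in the root process alongside networking, graphics, etc.
struct Constellation {
    /// Root of the pipeline tree (here, the browserhtml shell, with the
    /// tabs nested under it as iframes).
    root: Pipeline,
}

impl Constellation {
    /// Walk the tree, e.g. to decide which sandboxed content process each
    /// pipeline's script/layout threads should be scheduled into.
    fn for_each_pipeline(&self, f: &mut impl FnMut(&Pipeline)) {
        fn walk(p: &Pipeline, f: &mut impl FnMut(&Pipeline)) {
            f(p);
            for child in &p.children {
                walk(child, f);
            }
        }
        walk(&self.root, f);
    }
}

fn main() {
    let constellation = Constellation {
        root: Pipeline {
            id: PipelineId(0),
            url: "browser.html".into(),
            children: vec![
                Pipeline { id: PipelineId(1), url: "https://example.org".into(), children: vec![] },
                Pipeline { id: PipelineId(2), url: "https://bank.example".into(), children: vec![] },
            ],
        },
    };
    constellation.for_each_pipeline(&mut |p| println!("{:?} -> {}", p.id, p.url));
}

Nothing here crosses a process boundary; it only shows why "one
constellation, many pipelines" and "how pipelines map onto content
processes" are independent choices.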