On Mon, Mar 21, 2016 at 3:50 PM, Nicholas Nethercote wrote:
>
> - Heap overhead is significant. Reducing the page-cache size could save a
> couple of MiBs. Improvements beyond that are hard. Turning on jemalloc4
> *might* help a bit, but I wouldn't bank on it, and there are other
> complications. […]

On Tue, Mar 15, 2016 at 2:34 PM, Nicholas Nethercote wrote:
>
> Conclusion
>
> The overhead per content process is significant. I can see […]

I filed bug 876173[1] about this a long time ago. Recently, I talked to
Gabor, who's started looking into enabling multiple content processes.
One other thing we should be able to do is share the self-hosting
compartment, as we do between runtimes within a process. It's not that
big, but it's not […]

On 03/17/2016 08:05 AM, Thinker Li wrote:
> On Wednesday, March 16, 2016 at 10:22:40 PM UTC+8, Nicholas Nethercote wrote:
>> Even if we can fix that, it's just a lot of JS code. We can lazily import
>> JSMs; I wonder if we are failing to do that as much as we could, i.e. are
>> all these modules really needed at start-up? […]

On Wednesday, March 16, 2016 at 10:22:40 PM UTC+8, Nicholas Nethercote wrote:
> Even if we can fix that, it's just a lot of JS code. We can lazily import
> JSMs; I wonder if we are failing to do that as much as we could, i.e. are
> all these modules really needed at start-up? It would be great […]

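For readers unfamiliar with the lazy-import pattern being discussed: chrome
JS of this era can defer module loading through XPCOMUtils, so a JSM is only
read and compiled the first time it is touched rather than at every process
start-up. A minimal sketch (PlacesUtils.jsm is just a stand-in example, not
a module singled out in this thread):

// Get XPCOMUtils itself (this one has to be loaded eagerly).
const { XPCOMUtils } = Components.utils.import(
  "resource://gre/modules/XPCOMUtils.jsm", {});

// Eager import: the module is read, parsed, and compiled at start-up in
// every process that runs this script.
// Components.utils.import("resource://gre/modules/PlacesUtils.jsm");

// Lazy import: only a getter is defined now; PlacesUtils.jsm is loaded
// the first time code actually touches `PlacesUtils`.
XPCOMUtils.defineLazyModuleGetter(this, "PlacesUtils",
                                  "resource://gre/modules/PlacesUtils.jsm");
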
On 15/03/2016 04:34, Nicholas Nethercote wrote:
> - "heap-overhead" is 4 MiB per process. I've looked at this closely.
> The numbers tend to be noisy.
>
> - "page-cache" is pages that jemalloc holds onto for fast recycling. It is
> capped at 4 MiB per process and we can reduce that with a
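For scale, the arithmetic implied by those numbers (the 8-process figure is
the one from erahm's measurements cited below; treat this as a worst-case
bound, since the cap is a maximum, not a steady state):

// Worst-case page-cache cost from the numbers quoted above: a 4 MiB
// jemalloc page-cache cap in each process, with 8 content processes
// plus the parent. The cap is an upper bound, so real usage is lower.
const PAGE_CACHE_CAP_MIB = 4;
const CONTENT_PROCESSES = 8;
const worstCaseMiB = PAGE_CACHE_CAP_MIB * (CONTENT_PROCESSES + 1);
console.log("page-cache worst case: " + worstCaseMiB + " MiB"); // 36 MiB
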
I seem to remember that our ChromeWorkers (SessionWorker,
PageThumbsWorker, OS.File Worker) were pretty memory-hungry, but I don't
see any workers there. Does this mean that they have negligible overhead
or that they are only in the parent process?
Cheers,
David
On 15/03/16 04:34, Nicholas Nethercote wrote: […]

On Thu, Mar 17, 2016 at 9:50 AM, Nicolas B. Pierron
<nicolas.b.pier...@mozilla.com> wrote:
> Source compression should already be enabled. I think we do not do it
> for small sources or for huge sources, as the compression would either
> be useless or take a noticeable amount of time. […]

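A sketch of the size-based heuristic being described; the thresholds here
are invented for illustration and are not SpiderMonkey's actual cut-offs:

// Illustrative only: skip compression when the source is so small that
// the savings are negligible, or so large that compressing it would cause
// a noticeable pause. Both thresholds are made up for this sketch.
const MIN_COMPRESSIBLE_BYTES = 1024;             // below this: not worth it
const MAX_COMPRESSIBLE_BYTES = 5 * 1024 * 1024;  // above this: too slow
function shouldCompressSource(lengthInBytes) {
  return lengthInBytes >= MIN_COMPRESSIBLE_BYTES &&
         lengthInBytes <= MAX_COMPRESSIBLE_BYTES;
}
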
On Fri, Mar 18, 2016 at 2:29 AM, David Rajchenbach-Teller
<dtel...@mozilla.com> wrote:
>
> I seem to remember that our ChromeWorkers (SessionWorker,
> PageThumbsWorker, OS.File Worker) were pretty memory-hungry, but I don't
> see any workers there. Does this mean that they have negligible overhead
> or that they are only in the parent process?

On 3/17/16 9:50 AM, Nicolas B. Pierron wrote:
> Note, this worked on B2G, but this would not work for Gecko. For example,
> tab addons have to use toSource to patch JS functions.
Note that we do have the capability to lazily load the source from disk
when someone does this, and we do use it. […]

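The lazy-loading idea can be sketched like this (illustrative only, not the
actual SpiderMonkey mechanism): rather than keeping source text in memory,
keep a path to it and read it back the first time something like toSource
needs it. OS.File.read is the real Gecko API mentioned in this thread; the
wrapper itself is a hypothetical helper:

// Illustrative sketch, not SpiderMonkey internals: hold a path instead of
// the source text, and read the file back on first use.
Components.utils.import("resource://gre/modules/osfile.jsm");

function makeLazySource(path) {
  let pending = null;
  return function getSource() {
    if (!pending) {
      // Cache the promise so the file is read at most once.
      pending = OS.File.read(path, { encoding: "utf-8" });
    }
    return pending; // resolves to the source text
  };
}
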
Greetings,
erahm recently wrote a nice blog post with measurements showing the
overhead of enabling multiple content processes:

http://www.erahm.org/2016/02/11/memory-usage-of-firefox-with-e10s-enabled/

The overhead is high -- 8 content processes *doubles* our physical memory
usage -- which limits […]