On Mon, 27 Jul 2015 21:35:20 +1200, Karl Tomlinson wrote:

> Sometimes it would be nice to check in crashtests that use, or
> attempt to use, large memory allocations, but I'm concerned that
> checking in these crashtests could disrupt subsequent tests
> because there is then not enough memory to test what they want to
> test.

Following up here, mainly to report back on findings for anyone
considering the same in the future.

I ended up backing out the test that I wanted to add [1], and I
think the main blocker is that at least some platforms overcommit
memory.  Large allocations can succeed, but the shortage of memory
is not detected until the memory is actually touched.  At that
point a process is killed, and AFAIK there is no guarantee that it
is the process that touched the new memory.

Even if we configured platforms to not overcommit, I'd still be
uncomfortable adding a test that may force everything to swap to
disk.

It seems that fallible memory allocation is a complete solution
only when constrained by address space, but perhaps we could run
our tests with process data segment size limits tuned to the
hardware environment.

The dummy unload event listener is helpful to avoid keeping things
alive in the BF cache, but AFAIK SimpleTest.forceGC/CC() (or
similar DOMWindowUtils methods) plus carefully removing all
references are the only way to ensure that memory is released
immediately.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=999376#c12
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
