On 2/3/14 12:36 PM, Gregory Szorc wrote:
> Also, I believe chrome JS is never JIT compiled, so not getting JIT benefits with generators doesn't seem like a concern?
Chrome JS running in a window context (<script> elements from XUL documents and overlays) is currently baseline compiled but not ion compiled.
> I have the (possibly incorrect) perception that JS <-> SpiderMonkey C++ is more efficient than JS <-> other C++ (including DOM code and especially XPCOM code).
Ah, ok. JS -> SpiderMonkey C++ is definitely faster than JS -> XPCOM C++, by a factor of 20-100 or so depending on what you're doing.
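If you want a feel for that gap, a rough micro-benchmark along these lines (chrome-privileged JS; the nsISupportsString component and the iteration count are just arbitrary choices for illustration) would show it. The absolute numbers don't matter much, only the ratio:

  // Compare a call that stays inside SpiderMonkey with a property access
  // that goes through XPConnect into XPCOM C++. The component and count
  // are arbitrary; only the ratio of the two timings is interesting.
  const { classes: Cc, interfaces: Ci } = Components;
  var str = Cc["@mozilla.org/supports-string;1"]
              .createInstance(Ci.nsISupportsString);
  str.data = "hello";
  var arr = [1, 2, 3];
  var ITERATIONS = 100000;
  var dummy = 0;

  var start = Date.now();
  for (var i = 0; i < ITERATIONS; i++) {
    dummy += arr.indexOf(2);      // handled entirely inside SpiderMonkey
  }
  var smMs = Date.now() - start;

  start = Date.now();
  for (var i = 0; i < ITERATIONS; i++) {
    dummy += str.data.length;     // JS -> XPConnect -> XPCOM C++
  }
  var xpcomMs = Date.now() - start;

  dump("SpiderMonkey: " + smMs + "ms, XPCOM: " + xpcomMs + "ms (" + dummy + ")\n");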
JS -> DOM C++ is about the same as JS -> SpiderMonkey C++. It's faster in some cases when we're doing it from inside Ion code (not that this is relevant in chrome, per above), because the DOM gives the JIT a lot more information about the behavior of its functions than SpiderMonkey does, not least because anything SpiderMonkey really _really_ wants to be fast gets inlined straight into jitcode instead of going through a C++ stub call at all. The DOM doesn't quite have that luxury, though we do have the capability for some simple inlining like that for DOM getters, but not for methods.
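For the content-page case, where Ion does kick in, this is the sort of toy comparison that would show whether the getter inlining buys anything (getter and method chosen arbitrarily; the accumulator is just there to keep the loops from being optimized away):

  // Content-page sketch: DOM getter vs. DOM method call under Ion.
  // nodeType and hasChildNodes are arbitrary picks.
  var node = document.body;
  var ITERATIONS = 1000000;
  var acc = 0;

  var start = Date.now();
  for (var i = 0; i < ITERATIONS; i++) {
    acc += node.nodeType;                 // DOM getter (can be inlined)
  }
  var getterMs = Date.now() - start;

  start = Date.now();
  for (var i = 0; i < ITERATIONS; i++) {
    acc += node.hasChildNodes() ? 1 : 0;  // DOM method (goes through a call)
  }
  var methodMs = Date.now() - start;

  console.log("getter: " + getterMs + "ms, method: " + methodMs + "ms", acc);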
C++ -> JS is probably a bit faster in SpiderMonkey than in DOM code; see https://bugzilla.mozilla.org/show_bug.cgi?id=840201, but note that the DOM end has been optimized more since the last comment there. That said, the cxpusher overhead remains; it'll hopefully go away in the next few months, and at that point the two will be pretty comparable. Right now I'd guess DOM C++ -> JS is about 2x slower than SpiderMonkey C++ -> JS.
XPCOM C++ -> JS is definitely slower than DOM C++ -> JS, probably by a factor of 2-3 or so.
> I'd love, love, love a brownbag or similar training materials for JS developer education here.
That might be worth doing now that we're not changing this stuff all the time, yeah. Though note the caveats about Ion compilation and whatnot above. :( Giving one-size-fits-all performance advice is hard in our current chrome JIT environment.
> I'm worried about pretty much everything. Lots of Firefox features are now using promises over callbacks for their APIs. I worry about the explosion of promise usage contributing to a performance problem. I don't think we'd want to make that worse via excessive C++ bridging.
OK. The best way to address that worry is probably a stress test of some sort that we can then measure, profile, and use as a basis for optimizations as needed. Even if we're doubling the overhead of promises, if that overhead was only 0.5% of the workload to start with it might not be the bottleneck; if it was 20% of the workload, that's a totally different story.
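As a very rough starting point, a worst-case chain where the callbacks do nothing, so that basically all the time goes to the promise machinery itself, could look like this (assuming a Promise with the usual resolve/then API is in scope; the count is arbitrary):

  // Worst-case stress sketch: a long chain of no-op callbacks, so the
  // measurement is dominated by promise/callback overhead, not real work.
  function stressPromises(count) {
    var start = Date.now();
    var p = Promise.resolve(0);
    for (var i = 0; i < count; i++) {
      p = p.then(function (v) { return v; });   // callback does nothing
    }
    return p.then(function () {
      dump(count + " no-op callbacks in " + (Date.now() - start) + "ms\n");
    });
  }

  stressPromises(100000);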
> I'd be really interested in a worst-case (the JS side mostly doing nothing) stress test of promises here.
My personal suspicion is that the fact that DOM promises post a runnable to the event loop for each promise will be a much bigger problem in practice than the performance of callbacks...
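To make that concrete: even for an already-resolved promise, each callback is queued and runs on a later turn rather than synchronously at the .then() call site, so every link in a chain pays that scheduling cost:

  // The callback is dispatched asynchronously even though the promise is
  // already resolved, so this prints "before", "after", then "callback: 42".
  dump("before\n");
  Promise.resolve(42).then(function (v) {
    dump("callback: " + v + "\n");
  });
  dump("after\n");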
-Boris