On Monday, March 4, 2013 4:56:46 PM UTC-8, Jeff Hammel wrote:

> I'll point out (and really this is about all I have to say on this thread)
> that while perf testing (that is, recording results) may be... well, not
> easy, but not too awful, the rigorous analysis of what the data means, and
> of whether there is a regression, is often hard, since it is often the
> case, as evidenced by Talos, that distributions are non-normal and may be
> multi-modal. While I have no love of Talos, despite/because of sinking a
> year's worth of effort into it, I fear that any replacement will be done
> with a loss of all the wisdom harvested from the legacy system, which will
> then have to be relearned. If each team is responsible for its own perf
> testing, without a common basis and understanding of the stats analysis
> problem, I fear this will just multiply the problem. Frankly, one of the
> problems I've seen time and time again is the duplication of effort around
> a problem (which isn't a bad thing, except...) and a lack of consolidation
> towards a (moz-)universal solution.
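For what it's worth, here's a minimal sketch of the kind of thing Jeff means, using synthetic numbers rather than real Talos data, and assuming Python with NumPy/SciPy: when per-run times are multi-modal, a comparison of means (or a t-test, which assumes roughly normal, unimodal samples) is hard to interpret, and a non-parametric test like Mann-Whitney U is often a better fit.

    # Illustrative sketch only (not Talos/AWFY code); all numbers are made up.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # "Before": bimodal timings -- most runs are fast, some hit a slow path.
    before = np.concatenate([rng.normal(100, 2, 80), rng.normal(140, 2, 20)])
    # "After": the slow path is hit more often; the fast mode is unchanged.
    after = np.concatenate([rng.normal(100, 2, 60), rng.normal(140, 2, 40)])

    # Welch's t-test compares means under a normality assumption that this
    # data violates, so its p-value is hard to interpret.
    t_stat, t_p = stats.ttest_ind(before, after, equal_var=False)

    # Mann-Whitney U makes no normality assumption; it asks whether one
    # sample tends to produce larger values than the other.
    u_stat, u_p = stats.mannwhitneyu(before, after, alternative="two-sided")

    print(f"mean before={before.mean():.1f}  mean after={after.mean():.1f}")
    print(f"t-test p={t_p:.4f}   Mann-Whitney p={u_p:.4f}")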
Those are real issues, but do you really think they are so serious? AWFY seems to do the job, and the JS team is happy with it, certainly happier than with any other JS perf testing system we've had. One thing to note about it is that it doesn't have any automatic alarms or other actions; it feeds into human judgment only, so no statistical model is required.

On the general question of whether perf tests should be collected under one banner or distributed, the experience so far seems pretty clear that tests designed in a distributed way are much more successful at serving their purpose. I'm not convinced that most of these systems really need advanced statistical treatment to be useful. But if it would help, maybe it would be good to set up some kind of "perf testing group" that could meet from time to time and exchange knowledge?

Dave

_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform