My one-sentence summary of the article: if anything, the test cohort with the GPU process saw improved stability, especially for graphics crashes.
Which is awesome!

The figures may need error bars, though, so we can see how much of the result we should attribute to noise in the reported crash counts.

Also, I can't quite tell what the units are. The initial graphs seem to be #crashes-reported-to-Socorro, but later ones talk of #crashes-per-1000-usage-hours. The axes aren't labelled, so it's difficult to be precise.

Have you attempted to measure crash counts and types via Telemetry instead of Socorro? If all you need is a count and some metadata, the submission rate to Telemetry is much (25x) higher than Socorro's. (Actually, if all you need is a count, crash_aggregates would be a good place to start, as most of the counting has been done for you; a rough sketch follows at the end of this message.)

All in all, very interesting, and I can't wait to see future experiments on Aurora and Beta, when the time is right.

:chutten

On Mon, Jan 16, 2017 at 4:11 PM, Anthony Hughes <ahug...@mozilla.com> wrote:
> Hello Platform folks,
>
> Over the Christmas break I rolled out a Telemetry Experiment on Nightly to
> measure the stability impact of the GPU Process. The experiment concluded
> on January 11. Having had time to analyze the data, I've published a report
> on my blog: https://ashughes.com/?p=374
>
> It should come up on Planet shortly, but I wanted to post here for increased
> visibility. Feel free to send me questions, comments, and feedback.
>
> Cheers
>
> --
> Anthony Hughes
> Senior Quality Engineer
> Mozilla Corporation
> _______________________________________________
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
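P.S. In case it's useful, here's a rough sketch of what pulling those per-branch rates out of crash_aggregates might look like in a PySpark notebook. The S3 path, the experiment id ("pref-flip-gpu-process"), and the exact dimensions/stats key names are my assumptions about the dataset's layout, so check them against the crash_aggregates docs before trusting any numbers:

```python
# Rough, untested sketch. Assumes crash_aggregates is the parquet dataset
# with one row per submission_date x dimension combination, crash counts
# and usage hours pre-summed into a `stats` map, and metadata (channel,
# experiment id/branch, etc.) in a `dimensions` map.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("gpu-process-crash-rates").getOrCreate()

# Path is an assumption -- substitute wherever crash_aggregates lives.
df = spark.read.parquet("s3://telemetry-parquet/crash_aggregates/v1/")

rates = (
    df.filter(df.dimensions["channel"] == "nightly")
      # Experiment id is hypothetical -- use the real one from the report.
      .filter(df.dimensions["experiment_id"] == "pref-flip-gpu-process")
      .groupBy(df.dimensions["experiment_branch"].alias("branch"))
      .agg(
          F.sum(df.stats["usage_hours"]).alias("usage_hours"),
          F.sum(df.stats["main_crashes"]).alias("main_crashes"),
          F.sum(df.stats["content_crashes"]).alias("content_crashes"),
      )
      # Crashes per 1000 usage hours: the unit the later graphs use.
      .withColumn("main_rate",
                  F.col("main_crashes") / F.col("usage_hours") * 1000)
      .withColumn("content_rate",
                  F.col("content_crashes") / F.col("usage_hours") * 1000)
      # Crude 95% error bar, treating the crash count as Poisson:
      # 1.96 * sqrt(N) / hours * 1000.
      .withColumn("main_rate_err",
                  1.96 * F.sqrt(F.col("main_crashes"))
                       / F.col("usage_hours") * 1000)
)

rates.show()
```

The error-bar column is only a first cut: it treats crash counts as Poisson, which understates the variance when a few crashy clients dominate, but it should be enough to eyeball whether the gap between the control and test branches clears the noise.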