> There are extremely non-stable Talos tests, and relatively stable ones.
> Let's focus on the relatively stable ones.

It's not exclusively a question of noise in the tests.  Even
regressions in stable tests are sometimes hard to track down.  I spent
two months trying to figure out why I could not reproduce a Dromaeo
regression I saw on m-i using try, and eventually gave up (bug
653961).

It's great if we try to track down this mysterious 5% startup
regression.  We shouldn't ignore important regressions.  But what I
object to is the idea that if I regress Dromaeo DOM by 2%, I'm
automatically backed out and prevented from doing any work until I
prove that the problem is that I changed a filename somewhere.
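(Whether a small delta like 2% even exceeds a test's run-to-run noise can be checked with a simple heuristic.  The sketch below is hypothetical illustration only, not Talos's actual alerting logic; the function name and the three-sigma threshold are my own assumptions.)

```python
import statistics

def is_significant_regression(baseline_runs, new_runs, threshold_sigmas=3.0):
    """Flag a regression only if the new mean exceeds the baseline mean
    by more than `threshold_sigmas` standard deviations of baseline
    noise.  Illustrative heuristic, not the real Talos alerting logic."""
    base_mean = statistics.mean(baseline_runs)
    base_sd = statistics.stdev(baseline_runs)
    new_mean = statistics.mean(new_runs)
    return new_mean > base_mean + threshold_sigmas * base_sd

# A 2% shift on a test whose run-to-run noise is ~2% of the mean
# stays below the threshold, so it would not be flagged:
baseline = [100.0, 103.0, 97.0, 101.0, 99.0]
after = [102.0, 105.0, 99.0, 103.0, 101.0]
print(is_significant_regression(baseline, after))  # False
```

On a very stable test (baseline noise well under 1%), the same 2% shift would clear the threshold and be flagged, which is exactly the "stable vs. non-stable tests" distinction Ehsan draws below.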

On Fri, Aug 31, 2012 at 12:32 PM, Ehsan Akhgari <ehsan.akhg...@gmail.com> wrote:
> On 12-08-31 6:01 AM, Justin Lebar wrote:
>>
>> I'm not saying it should be OK to regress our performance tests, as a
>> rule.  But I think we need to acknowledge that hunting regressions can
>> be time-consuming, and that a policy requiring that all regressions be
>> understood may hamstring our ability to get anything else done.
>> There's a trade-off here that we seem to be ignoring.
>
>
> There is definitely a trade-off here, and at least for the past year (and
> maybe for the past two years) we have in practice weighed the difficulty of
> tracking down performance regressions so heavily that we've been ignoring
> them (except for perhaps a few people.)
>
> It is a mistake to take Rafael's example and extend it to the average
> regression that we measure on Talos.  It's true that sometimes those things
> happen, and in practice we cannot deal with them all, because we don't have
> an army of Rafaels.  But it bothers me when people take an example of a very
> difficult-to-understand regression, encountered by a person who bravely
> delves into low-level compiler code generation, and extend it into a policy
> covering all regressions.  Please, let's not do that.
>
> And let's remember the other side of the trade-off too.  A lot of blood and
> tears have gone into shaving milliseconds off our startup time.  Taking a
> ~5% hit on startup time within a 6-week cycle effectively undoes man-months
> of startup optimizations.  So it's not as if letting these regressions slip
> in under our noses is going to make us all more productive.
>
> There are extremely non-stable Talos tests, and relatively stable ones.
> Let's focus on the relatively stable ones.  There are extremely
> hard-to-diagnose performance regressions, and extremely easy ones (e.g.,
> let's not wait on this lock, do this I/O, run this exponential algorithm,
> load tons of XUL/XBL when a window opens, etc.)  We have many great tools
> for the job, so not all regressions need to be treated the same.
>
> Cheers,
> Ehsan
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
