> 1) something checked into m-c anyone can easily author or run (for tracking 
> down regressions) without having to check out a separate repo, or set up and 
> run a custom perf test framework.

I don't oppose the gist of what you're suggesting here, but please
keep in mind that small perf changes are often very difficult to track
down locally.  Small changes in system and toolchain configuration can
have large effects on average build speed and its variance.  For
example, I've found observable performance differences between Try and
m-c/m-i builds in the past (bug 653961), despite their build configs
being nearly identical.
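
To put a number on that noise, here's a minimal sketch (plain Python;
the function and its 2-sigma threshold are my own illustration, not
anything in our infrastructure) of the kind of check you'd want
before trusting a local before/after comparison:

    # Hypothetical sketch, not real tooling: decide whether the mean
    # delta between two sets of build times (seconds) exceeds noise.
    import statistics

    def delta_exceeds_noise(before, after, sigmas=2.0):
        """True if the mean delta is over `sigmas` standard errors."""
        delta = statistics.mean(after) - statistics.mean(before)
        # Standard error of the difference of two independent means.
        se = (statistics.variance(before) / len(before)
              + statistics.variance(after) / len(after)) ** 0.5
        return delta > sigmas * se

With ~5% run-to-run variance, a 1% regression needs on the order of a
couple hundred runs per side to clear a 2-sigma bar, which is why
small regressions rarely reproduce cleanly on one local machine.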

In my experience, we spend the majority of our time trying to track
down small perf changes, so a change that only makes it easier to
track down the source of large perf changes might not have an
outsized effect.

> 3) no releng overhead for setup of new perf tests. something that is built 
> into the test framework / infrastructure we set up.

If we did this, we'd need to figure out how and when to promote
benchmarks to "we care about them" status.

We already don't back out changes for regressing a benchmark the way
we back them out for regressing tests.  I think this is at least
partially because of a general sentiment that not all of our
benchmarks correlate strongly with what they're trying to measure.

I suspect that if anyone could check in a benchmark, the average
quality of benchmarks would stay roughly the same, but the number of
benchmarks would increase.  In that case we'd have even more
benchmarks with spurious regressions to deal with.
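
To put hypothetical numbers on that: if each benchmark flags a
spurious regression on, say, 1% of pushes (a made-up rate, purely for
illustration), the chance that at least one of n benchmarks flags any
given push is 1 - 0.99**n:

    # Illustrative arithmetic only; the 1% false-positive rate per
    # benchmark is an assumption, not a measured number.
    for n in (10, 50, 100, 300):
        print(n, round(1 - 0.99 ** n, 2))
    # 10 -> 0.1, 50 -> 0.39, 100 -> 0.63, 300 -> 0.95

So at a few hundred benchmarks, nearly every push would trip some
alarm somewhere, regardless of benchmark quality.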

-Justin
