Hi Gabor! Thanks for starting this thread.

On 2016-03-10 5:07 AM, Gabor Krizsanits wrote:
> While the other thread about a fuzzing-friendly Gecko is an interesting
> option, I would like to go back to the original topic and start another
> thread to collect other ideas that might help us get better on the
> performance front. Here are some of my thoughts after spending some time
> with the profiler and Talos tests over the past couple of weeks.

> Most regressions probably happen where we don't detect them, because of
> the lack of perf test coverage. It should be easy and straightforward to
> add a new Talos test (it isn't right now). There is ongoing work on this,
> I think, but I don't know where that work is being tracked. We clearly
> need more tests. A lot more. Especially if we want to ship features with
> a huge impact, like multi-process Firefox or removing XUL. I don't think
> we have all the metrics we need yet to make the best decisions.

Yes, this is a lot easier now that we don't have to configure Graphserver every time we add a new test (Perfherder, as opposed to Graphserver, is smart enough to handle basically anything new that people care to submit to it). In fact, :mconley just added a new test (tabpaint) last week, with no modifications to Perfherder necessary.
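
For anyone curious what that involves on the Talos side: a new pageloader-style test is essentially a class definition in testing/talos/talos/test.py plus a manifest of pages to load. A minimal sketch, where the test name, manifest path, and cycle counts are all made up for illustration:

    # Sketch of an entry in testing/talos/talos/test.py; register_test
    # and PageloaderTest are defined earlier in that module. The exact
    # attributes a test needs depend on what it measures.
    @register_test()
    class myfeature(PageloaderTest):
        """Times <some operation> across a set of locally served pages."""
        tpmanifest = '${talos}/tests/myfeature/myfeature.manifest'
        tpcycles = 1        # number of passes over the full page set
        tppagecycles = 25   # number of loads of each individual page

Perfherder then just picks up whatever numbers the harness reports; no server-side schema changes are needed.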

We (mainly meaning jmaher and myself, the Talos/Perfherder maintainers) haven't really emphasized/encouraged adding new tests in the past, as there was a feeling that we were having difficulty just staying on top of the tests we already had. Now that we have a better system for sheriffing regressions (as well as a system to separate tests into different "buckets" so the burden of this can be shared amongst a larger group of people), it may well be time to consider adding new benchmarks.

Another thing to note is that new tests don't even need to be part of Talos. We are now capable of accepting data from *any* job that is ingested by Treeherder. For example, gbrown added a new Android-specific memory test as part of the mochitest-browser-chrome suite a couple of weeks back:

https://gbrownmozilla.wordpress.com/2016/01/27/test_awsy_lite/
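
The mechanism behind this is deliberately simple: Perfherder scrapes any line of a job's log that starts with "PERFHERDER_DATA:" followed by a JSON blob describing the measurements. A rough sketch of what a harness might print, where the framework, suite, and subtest names are invented for the example:

    import json

    # Everything after the "PERFHERDER_DATA:" prefix is parsed as JSON;
    # a payload is one or more suites, each with an optional summary
    # value and a list of subtest measurements.
    perf_data = {
        "framework": {"name": "example-harness"},  # invented name
        "suites": [
            {
                "name": "resident-memory",
                "value": 104857600,  # suite summary value, in bytes
                "subtests": [
                    {"name": "after-tabs-open", "value": 104857600},
                ],
            },
        ],
    }
    print("PERFHERDER_DATA: " + json.dumps(perf_data))

Any job that Treeherder ingests can emit this, which is what makes it possible to hang perf data off an existing suite like mochitest-browser-chrome.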

> We do have some explanations about each Talos test at
> https://wiki.mozilla.org/Buildbot/Talos/Tests and I'm thankful for that,
> but some of the tests need more explanation, and some of them don't have
> any. We could further improve that; it would save a lot of engineering
> time (this wiki rocks, by the way).

In the past, I've found needinfo'ing the test owner helpful for this sort of thing.

> Add-ons. The last number I heard is that 40% of our users use some
> add-ons. We have access to these add-ons' code, yet we don't have any
> performance tests using them. It should be our responsibility to make
> sure that if we regress the user experience with some of the most popular
> add-ons, we at least give the authors a heads-up and help them address
> the problem. I know resources are limited, but maybe there is some
> low-hanging fruit here that would make a huge impact.

If I remember correctly, there were some efforts to run the standard set of Talos tests with add-ons enabled, but I don't think they were particularly successful. In general, I'm not crazy about creating too many variants of the existing tests; that will just lead to a firehose of information that will be difficult to manage.

I suspect a better approach may be to create a microbenchmark measuring some of the common internal operations an add-on might want to perform, though I could be wrong.
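
To sketch what I mean: the shape would be a plain repeat-and-take-the-median harness around one operation at a time, reporting through the same PERFHERDER_DATA channel described above. The workload timed below is just a placeholder for whatever add-on-facing operation we actually care about:

    import json
    import statistics
    import time

    def run_microbenchmark(operation, iterations=1000):
        """Time one small operation many times; the median is less
        sensitive to noise than the mean for this kind of measurement."""
        samples = []
        for _ in range(iterations):
            start = time.perf_counter()
            operation()  # placeholder for the real add-on-visible call
            samples.append((time.perf_counter() - start) * 1000.0)  # ms
        return statistics.median(samples)

    # Hypothetical stand-in workload, purely for illustration.
    median_ms = run_microbenchmark(lambda: dict(a=1, b=2).copy())
    print("PERFHERDER_DATA: " + json.dumps({
        "framework": {"name": "addon-microbench"},  # invented name
        "suites": [{"name": "dict-roundtrip", "value": median_ms}],
    }))

Each microbenchmark would get its own suite, so a regression shows up against one specific operation rather than as noise across a whole pageload run.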

Will

