On Fri, Nov 8, 2013 at 2:01 AM, L. David Baron <dba...@dbaron.org> wrote:
> I think this depends on what you mean by "known intermittent
> failures".  If a known intermittent failure is the result of any
> regression that leads to a previously-passing test failing
> intermittently, I'd be pretty uncomfortable with this.  There have
> been quite a few JS engine changes that led to style system
> mochitests failing intermittently; I wouldn't want all of the style
> system's test coverage to be progressively turned off as a result.
> But if you're talking about new tests that aren't yet passing
> reliably, or other cases where the module owner of the test
> recognizes that the regression is acceptable, then that seems ok.
>
> We need to get better about identifying and backing out changes that
> cause previously-passing tests to start failing intermittently.
> Doing that requires better tools.

Agreed, but this is orthogonal to what I was saying.  I was referring
to cases where we have an intermittently failing test that we keep
as-is for whatever reason, such as when it's a new test or we don't
know which commit caused it to fail.  Of course, if we can identify
the commit that caused it to start failing intermittently, we should
back it out, as with any regression.  A simple way to find the
problematic commit would be to hg bisect, running the test at each
step enough times to hit the failure more or less reliably.  This
could be automated fairly easily in theory.
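To make the idea concrete, here is a rough sketch of how that could
look, assuming a POSIX shell; the run count, the `check.sh` filename,
and the `./mach mochitest` invocation in the usage comment are
illustrative assumptions, not real tooling we have today:

```shell
#!/bin/sh
# check.sh: mark a revision "bad" for hg bisect if the given test
# command fails at least once in N runs.
N=50

run_flaky_check() {
  # Run the command given as arguments up to $N times; return non-zero
  # (bad) on the first failure, zero (good) if every run passes.
  i=0
  while [ "$i" -lt "$N" ]; do
    "$@" || return 1
    i=$((i + 1))
  done
  return 0
}

# Usage with hg bisect (sketch):
#   hg bisect --reset
#   hg bisect --bad                 # tip fails intermittently
#   hg bisect --good KNOWN_GOOD_REV
#   hg bisect --command 'sh check.sh ./mach mochitest path/to/test'
```

The choice of N is a trade-off: it has to be large enough that a
genuinely flaky revision fails at least once with high probability,
or bisect will be led astray by a false "good".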

As for new tests, someone (James Graham?) once told me that Opera runs
every newly-added test a hundred times or something to verify that
it's reliable.  Also relatively simple in theory, if only our test
runners knew what a "newly-added test" was and how to rerun a single
test.
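The "what is a newly-added test" half could plausibly be answered from
version control.  A hypothetical sketch, assuming `hg status` output
(lines like `A path/to/test_foo.html`) and assuming test files are
named with a `test_` prefix; `run_many_times` in the usage comment is
a made-up placeholder for a repeat-runner like the one above:

```shell
#!/bin/sh
# Filter hg status output down to newly-added test files.
list_new_tests() {
  # Keep only added ("A") files whose basename starts with "test_".
  awk '$1 == "A" && $2 ~ /(^|\/)test_/ { print $2 }'
}

# Usage (sketch):
#   hg status --rev LAST_KNOWN_GOOD | list_new_tests |
#   while read -r t; do run_many_times "$t"; done
```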
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
