Hi folks,

Recently we've started writing some platform tests that exercise content processes for B2G, and as you can probably imagine those tests are somewhat fragile.
I think it's safe to say that no one likes randomly failing tests. Our extremely overworked and underpaid sheriffs and volunteers hate them because they have to spend a bunch of time trying to understand them, file bugs about them, star them, and back people out because of them. As a developer, I hate seeing my own tests fail because it means I didn't think through every corner case. I hate it when other people's tests fail because I don't know if it's safe to land my next changes. All in all it's pretty lame.

However, I firmly believe that disabling a randomly failing test is a decision that should be made by the module owners and peers. Perhaps it's only a recent development, but I've seen my OOP test suite for IndexedDB disabled twice in the last week without anyone consulting the module owner or peers. There's no real way that any of our sheriffs or volunteers can know all the ramifications of disabling a particular test (and it shouldn't be their responsibility to find out - nobody has that much time!).

The solution is simple, though: request review from a module peer before disabling a test. Peers can decide whether the test is worth disabling, pull someone off of other tasks to fix it, or whatever. Disabling a test without a peer's input and then leaving an unassigned bug open to re-enable it is a pretty good way to leave the test disabled forever.

Any thoughts or objections?

-Ben