Aryeh Gregor wrote:

Currently we only check that no test fails unless it is on the per-file whitelist 
of expected fails, and in practice that works fine for us. If we wanted to be 
pickier, we could list all expected results, both pass and fail, and verify 
that the lists match exactly. This is unpleasant in practice because some of 
the test files I wrote run tens of thousands of tests, which leads to JSON 
files several megabytes in size that have to be loaded to run the tests.

Why not simply verify that the set of actual fails equals the set of expected fails, and report any test that appears in only one of the two sets?
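That comparison amounts to a symmetric difference over two sets of test names. A minimal sketch (function and test names are mine, purely illustrative, not from any actual harness):

```python
def diff_failures(expected_fails, actual_fails):
    """Compare actual failures against the expected-fails whitelist.

    Returns (unexpected, fixed):
      unexpected - tests that failed but were not whitelisted
      fixed      - tests that were whitelisted but now pass
    """
    expected = set(expected_fails)
    actual = set(actual_fails)
    unexpected = sorted(actual - expected)
    fixed = sorted(expected - actual)
    return unexpected, fixed


# Illustrative run: test_b fails as expected, test_c is a new failure,
# and test_a unexpectedly passes.
unexpected, fixed = diff_failures(
    ["test_a", "test_b"],  # expected fails (hypothetical names)
    ["test_b", "test_c"],  # actual fails
)
print("UNEXPECTED-FAIL:", unexpected)  # ['test_c']
print("UNEXPECTED-PASS:", fixed)       # ['test_a']
```

Either list being non-empty would flag the run, so the whitelist stays small while unexpected passes are still caught.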

--
Warning: May contain traces of nuts.
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
