On Sat, Nov 10, 2012 at 9:41 AM, Boris Zbarsky <bzbar...@mit.edu> wrote:
> I believe right now we have a list of "known failures" alongside such tests,
> and our own test harness knows to compare what the tests are reporting to
> our list of known failures.  As in, we're not using the pass/fail state of
> the tests directly; we're comparing it to "should all pass, except this
> whitelist of things that we know fail".
>
> Constructing these whitelists of known failures is indeed a bit of a PITA,
> but they're pretty static until we fix stuff, usually.

Yes, exactly.  And they're quite easy to construct -- there's a script
these days (parseFailures.py) that you run on the output of the
mochitests, and it creates all the directories and files for you.

The code for our testharness.js wrapping is here:

http://hg.mozilla.org/mozilla-central/file/ea5c4c1b0edf/dom/imptests

See the README.  To import a new test suite, all you have to do is add
a line to a file (or a new file) specifying the location of the test
suite and the directories that are wanted, run the importTestsuite.py
script, run the test suite to get a list of known failures, and use
parseFailures.py on the result to generate appropriate JSON files in
the "failures" directory.  It's only a minor hassle.
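For illustration only, here is a made-up shape for one of those per-file
failures JSON files -- the filename, test names, and schema are all
assumptions, not the actual format (the README in dom/imptests is the
authority on that):

```json
{
  "test_feature.html": [
    "Interface object length",
    "historical attribute has been removed"
  ]
}
```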

Currently we check only that no test fails unless it appears on the
per-file whitelist of expected failures, and in practice that works
fine for us.
If we wanted to be pickier, we could list all expected results, both
pass and fail, and verify that the lists exactly match.  This is
unpleasant in practice because some of the test files I wrote run tens
of thousands of tests, which leads to JSON files quite a few megabytes
in size that have to be loaded to run the tests.  Since in most files
we pass all or almost all tests, storing only failures is a very
effective optimization.
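A minimal sketch of that check -- the names and result shape here are
assumptions for illustration, not Mozilla's actual harness code:

```javascript
// Whitelist of tests known to fail; in practice this would be loaded
// from a per-file failures JSON.
const expectedFailures = new Set(["test-a", "test-b"]);

// results: array of { name, passed } pairs from a testharness.js run.
// Returns the names of failures that are NOT on the whitelist.
function unexpectedFailures(results) {
  return results
    .filter(({ name, passed }) => !passed && !expectedFailures.has(name))
    .map(({ name }) => name);
}

console.log(unexpectedFailures([
  { name: "test-a", passed: false }, // known failure: ignored
  { name: "test-c", passed: true },  // pass: fine
  { name: "test-d", passed: false }, // new failure: flagged
]));
// logs ["test-d"]
```

Since most files pass everything, the whitelist stays tiny even when a
file runs tens of thousands of tests -- which is the whole point of
storing only failures.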

It's certainly true that if a file threw an exception at file scope,
it would make the test useless.  However, if we can't change the file,
there could be all kinds of things about it that are broken anyway.
For instance, it could wrap many unrelated things in a single test(),
and then you have exactly the same problems.  For the time being, any
tests we import are tests we can change -- they're all from the W3C
and most are written by Mozilla contributors.  So if a test doesn't
play nicely with our system, we can always change it.  I expect that to
continue to be the case.  If an odd file breaks and we can't change it
for whatever reason, we can live without the test coverage.  It's not
a problem we have in practice.

So I still don't see the value in the test/assert model used by
testharness.js, as opposed to having everything be one big test but
asserts be non-fatal.  In particular, what value does test() provide
that couldn't be provided about as well by try/catch blocks, aside
from aesthetic preferences about grouping?
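To make the comparison concrete, here is a sketch of the alternative
I'm describing -- one big test where each group is a try/catch block
with non-fatal failure recording.  The helper names are made up, not
testharness.js API:

```javascript
const failures = [];

// Throwing assert, like any ordinary assertion helper.
function assertEquals(actual, expected, message) {
  if (actual !== expected)
    throw new Error(`${message}: expected ${expected}, got ${actual}`);
}

// A try/catch block standing in for test(): a failure in one group
// is recorded but doesn't stop later groups from running.
function group(name, body) {
  try {
    body();
  } catch (e) {
    failures.push(`${name}: ${e.message}`);
  }
}

group("arithmetic", () => assertEquals(1 + 1, 2, "sum"));
group("broken", () => assertEquals(1 + 1, 3, "sum"));

// Only "broken" is recorded, and the harness kept going -- the same
// isolation test() provides, with no extra machinery.
console.log(failures);
```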
_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform