On Fri, Sep 5, 2014 at 8:55 AM, James Graham <ja...@hoppipolla.co.uk> wrote:
> The web-platform-tests testsuite has just landed on
> Mozilla-Central. It is an import of a testsuite collated by the W3C
> [1], which we intend to keep up-to-date with upstream. The tests are
> located in /testing/web-platform/tests/ and are now running in automation.
>
> Initially the testsuite, excluding the reftests, is running on Linux
> 64 opt builds only. If it doesn't cause problems there it will be
> rolled out to other configurations, once we are confident they will
> be equally stable.
>
> The jobs are indicated on tbpl and treeherder by the symbols W1-W4. The
> reftests will show up as Wr once they are enabled.
>
> == How does this affect me? ==
>
> Because web-platform-tests is imported from upstream we can't make
> assumptions like "all tests will pass". Instead we explicitly store
> the expected result of every test that doesn't just pass in an
> ini-like file with the same name as the test and a .ini suffix in
> /testing/web-platform/meta/. If you make a change that affects the
> result of a web-platform-test you need to update the expected results
> or the testsuite will go orange.
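>
> For illustration only, the metadata file for a hypothetical test
> dom/example.html would live at
> /testing/web-platform/meta/dom/example.html.ini and look roughly like
> this (the README describes the exact keys and statuses):
>
>   [example.html]
>     expected: TIMEOUT
>     [name of a failing subtest]
>       expected: FAIL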
>
> Instructions for performing the updates are in the README file
> [2]. There is tooling available to help in the update process.
>
> == OK, so how do I run the tests? ==
>
> Locally, using mach:
>
> mach web-platform-tests
>
> or, to run only a subset of tests:
>
> mach web-platform-tests --include=dom/
>
> To run multiple tests at once (at the expense of undefined ordering
> and greater nondeterminism), use the --processes=N option.
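>
> For example, to run just the dom/ tests with four parallel processes:
>
> mach web-platform-tests --include=dom/ --processes=4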
>
> The tests are also available on Try; the trychooser syntax is
>
> -u web-platform-tests
>
> Individual chunks can also be run, much like for mochitest.
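>
> By analogy with mochitest, a single chunk can presumably be selected
> with something like
>
> -u web-platform-tests-1
>
> for the first chunk; check trychooser for the exact job names.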
>
> It's also possible to just start the web server and load tests into
> the browser, as long as you add the appropriate entries to your hosts
> file. These are documented in the web-platform-tests README file
> [3]. Once these are added, running
>
> python serve.py
>
> in testing/web-platform/tests will start the server and allow the
> tests to be loaded from http://web-platform.test:8000.
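>
> For reference, the hosts entries are roughly the following (the README
> [3] has the authoritative list):
>
> 127.0.0.1 web-platform.test
> 127.0.0.1 www.web-platform.test
> 127.0.0.1 www1.web-platform.test
> 127.0.0.1 www2.web-platform.test
> 127.0.0.1 xn--n8j6ds53lwwkrqhv28a.web-platform.test
> 127.0.0.1 xn--lve-6lad.web-platform.test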
>
> == What does it mean if the tests are green? ==
>
> It means that there are no "unexpected" results. These expectations
> are set based on the existing behaviour of the browser. Every time the
> tests are updated the expectations will be updated to account for
> changes in the tests. It does *not* mean that there are no tests that
> fail. Indeed there may be tests that have even worse behaviour like
> hanging or crashing; as long as the behaviour is stable, the test will
> remain enabled (this can occasionally interact in somewhat wonky ways
> with the tbpl UI). When looking at jobs, unexpected results always
> start with TEST-UNEXPECTED-.
>
> So far I haven't spent any time filing bugs about issues found by the
> tests, but there is a very basic report showing those that didn't pass
> at [4]. I am very happy to work with people with some insight into
> what bugs have already been filed to get new issues into Bugzilla. I
> will also look at making a continually updated HTML report. In the
> longer term I am hopeful that this kind of reporting can become part
> of the Treeherder UI so it's easy to see not just where we have
> unexpected results but also where there are expected failures
> indicating buggy code.
>
> == What kinds of things are covered by these tests? ==
>
> web-platform-tests is, in theory, open to any tests for web
> technologies. In practice most of the tests cover technologies in the
> WHATWG/W3C stable, e.g. HTML, DOM, various WebApps specs, and so
> on. The notable omission is CSS; for historical reasons the CSS tests
> are still in their own repository. Convergence here is a goal for the
> future.
>
> == We already have mochitests; why are we adding a new testsuite? ==
>
> Unlike mochitests, web-platform-tests are designed to work in any
> browser. This means that they aren't just useful for avoiding
> regressions in Gecko, but also for improving cross-browser interop;
> when developing features we can run tests that other implementers have
> written, and they can run tests we have written. This will allow us to
> detect compatibility problems early in a feature's life-cycle, before
> they have the chance to become a source of frustration for
> authors. With poor browser compatibility being one of the main
> complaints about developing for the web, improvements in this area are
> critical for the ongoing success of the platform.
>
> == So who else is running the web-platform-tests? ==
>
> * Blink run some of the tests in CI ([5] and various other locations
>   scattered through their tree)
> * The Servo project are running all the tests for spec areas they have
>   implemented in CI [6]
> * Microsoft have an Internet Explorer-compatible version of the test runner.
>
> In addition we are using web-platform-tests as one component of the
> FirefoxOS certification suite.
>
> The harness [7] we are using for testing Gecko is browser-agnostic so
> it's possible to experiment with running tests in other browsers. In
> particular it supports Firefox OS, Servo and Chrome, and Microsoft
> have patches to support IE. Adding support for other browsers that
> support some sort of remote-control protocol (e.g. WebDriver) should
> be straightforward.
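>
> As a rough sketch (the exact flags and paths are in the wptrunner docs
> [7], so treat this as illustrative), running the tests directly against
> a desktop Firefox build looks something like:
>
> wptrunner --product=firefox --binary=/path/to/firefox \
>           --tests=testing/web-platform/tests \
>           --metadata=testing/web-platform/meta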
>
> == Does this mean I should be writing web-platform-tests? ==
>
> Yes.
>
> When we are implementing web technologies, writing cross-browser tests
> is generally better than writing proprietary tests. Having tests that
> multiple vendors run helps advance the mission, by providing a
> concrete way of assessing spec conformance and improving interop. It
> also provides short term wins since we will discover compatibility
> issues closer to the time that the code is originally written, rather
> than having to investigate broken sites later on. This also applies to
> other vendors of course; by encouraging them to run tests that we have
> written they are less likely to introduce bugs that manifest as
> compatibility issues which, in the worst case, lead to us having to
> "fix" our implementation to match their mistakes.
>
> But.
>
> At the moment, the process for interacting with web-platform-tests
> requires direct submission to the upstream GitHub repository. In the
> near future this workflow will be improved by adding a directory for
> local modifications or additions to web-platform-tests in the Mozilla
> tree (e.g. testing/web-platform/local). Once landed in m-c any tests
> here will automatically be pushed upstream during the next
> web-platform-tests sync (as long as the test has r+ in Bugzilla it
> doesn't need to be reviewed again to land upstream). This, combined
> with the more limited featureset and platform coverage of
> web-platform-tests compared to mochitest, means that this email is
> explicitly *not* a call to change any policy around test formats at this
> time.
>
> == I'm feeling virtuous! Where's the documentation for writing tests? ==
>
> The main documentation is at Test The Web Forward [8]. I am in the
> process of updating this to be more current; for now the most
> up-to-date documentation is in my fork of the website at [9]. This will be
> merged upstream in the near future.
>
> For tests that require server-side logic, web-platform-tests uses a
> custom Python-based server which allows test-specific behaviour
> through simple .py files. Documentation for this is found at [10].
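>
> As a minimal sketch (the file name and parameter are hypothetical; see
> the wptserve docs [10] for the full request/response API), a handler in
> a file echo-token.py alongside a test could look like:
>
> def main(request, response):
>     # Echo a query parameter back as the response body.
>     token = request.GET.first("token", "none")
>     return [("Content-Type", "text/plain")], token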
>
> If you have any questions, feel free to ask me.
>
> == How do I write tests that require non-web-exposed APIs? ==
>
> One of the disadvantages of cross-browser testing is that you are
> limited to APIs that work in multiple browsers. This means that tests
> in web-platform-tests can't use e.g. SpecialPowers. For anything
> requiring this you will still have to write a mochitest, as you do today.
>
> In the future we plan to integrate WebDriver support into
> web-platform-tests which will make some privileged operations, and
> simulation of user interaction with the content area, possible.
>
> == You didn't answer my question! ==
>
> If you have any further questions I'm very happy to answer them,
> either here, by email or on irc (#ateam on the Mozilla server or
> #testing on irc.mozilla.org).
>
> [1] https://github.com/w3c/web-platform-tests/
> [2] https://hg.mozilla.org/mozilla-central/file/tip/testing/web-platform/README.md
>     (formatted: https://github.com/mozilla/gecko-dev/blob/master/testing/web-platform/README.md)
> [3] https://github.com/mozilla/gecko-dev/blob/master/testing/web-platform/tests/README.md
> [4] http://hoppipolla.co.uk/web-platform-tests/gecko_failures_2014-08-28.html
> [5] https://code.google.com/p/chromium/codesearch#chromium/src/third_party/WebKit/LayoutTests/w3c/web-platform-tests/
> [6] https://travis-ci.org/servo/servo/ (see the AFTER_BUILD=wpt jobs)
> [7] http://wptrunner.readthedocs.org/en/latest/
> [8] http://testthewebforward.org
> [9] http://jgraham.github.io/docs/
> [10] http://wptserve.readthedocs.org/en/latest/
> _______________________________________________
> dev-platform mailing list
> dev-platform@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform

This is awesome.  Great work.

- Kyle