I am attempting to build Firefox for OpenIndiana (Illumos). Some of the tests are
failing:
Result summary:
Passed: 30561
Failed: 480
How can I check what the reason for a failed check is?
In the log I see, for example:
TEST-PASS | js/src/jit-test/tests/wasm/timeout/debug-interrupt-2.js |
Success
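One generic way to dig the failures out of a log like that is to filter on the harness's result markers. A minimal sketch (the log filename and the failing-test path below are made up for illustration; the "TEST-UNEXPECTED" prefix is the usual Mozilla test-harness convention for unexpected results):

```shell
# Write a tiny sample log in the harness's line format, then filter it.
# Real logs are much larger; only the grep pattern matters here.
# "tests/example/broken.js" is a hypothetical failing test path.
cat > results.log <<'EOF'
TEST-PASS | js/src/jit-test/tests/wasm/timeout/debug-interrupt-2.js | Success
TEST-UNEXPECTED-FAIL | js/src/jit-test/tests/example/broken.js | Assertion failure
EOF

# Keep only the unexpected results; everything marked TEST-PASS is dropped.
grep '^TEST-UNEXPECTED' results.log
```

Each surviving line names the failing test file, which you can then re-run on its own for full output.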
Do you write Gecko/Firefox patches or testcases, or monitor the Mozilla
trees?
If so, you may run into new intermittent test failures due to a recent
(intentional) behavior change.
On 2020-12-31, a patch landed in
https://bugzilla.mozilla.org/show_bug.cgi?id=1676966 that changed how
font
If you never use `mach python-test`, you can stop reading now.
As of the latest version of mozilla-central, `mach python-test` no longer takes a
`--python` command-line argument. If you're used to using it, you have
alternatives:
* If you're used to doing `./mach python-test --python 3
On 7/29/20 6:28 AM, James Graham wrote:
As an aside/reminder for people, for web-platform-tests in particular
the dashboard at https://jgraham.github.io/wptdash/ can give you
information about all tests in a component and is designed to answer
questions like "which tests are failing in Firefox
test's
mochitest.ini-style files!
- What WPT disabled patterns govern the skips from your WPT test's meta
files!
- How many WPT subtests defy expectations!
- The wpt.fyi URL of the test you're looking at... by clicking on the
"Web Platform Tests Dashboard" link in the "N
Have you ever been looking at a test file in Searchfox and wondered how
the test could possibly work? Then you do some more investigation and
it turns out that the test, in fact, does not work and is disabled?
Perhaps you even melodramatically threw your hands up in the air in
frustration
tl;dr: Your |mach try auto| pushes can now precisely schedule relevant
manifests, catching more regressions with fewer resources.
I'm happy to announce an important new capability that has been added to
our CI infrastructure: scheduling specific test manifests.
We've been building
On the next merge to mozilla-central, the `mach test --headless` command
will no longer give an error on directories containing xpcshell tests
<https://bugzilla.mozilla.org/show_bug.cgi?id=1550518> (along with other
suites already supporting --headless). Prior to this enhancement the
command
Because some certificates stored in the source code directory needed
renewal, pushes to Try based on older revisions than the latest commit
("tip") for all mozilla-* repositories and autoland will yield many test
failures.
Please rebase your patch stack before pushing to Try.
This is a great summary, and reflects a ton of hard work over the past year
to improve our mobile testing story and reduce our CI spend. Thanks gbrown
and everyone else who helped make it happen!
On Fri, Oct 18, 2019 at 11:52 AM Geoffrey Brown wrote:
The Android test environments used for continuous integration have been
through many changes over the last year or two; here's a review of what we
have today. [1]
Most of our Android tests run on emulators. Some run on hardware: real
phones.
Our Android hardware tests run on physical de
Hi all,
On Windows, as part of our work to move GPU access and win32k system calls out
of the content process, we are moving accelerated Canvas 2D to the GPU process.
I am nearly ready to enable this by default on Nightly and would really
appreciate it if people running Nightly on Windows would
tl;dr We now have a --verbose-if-fails option that outputs a log if an
xpcshell-test fails when being run in parallel.
If you ever have seen output from xpcshell-tests like this:
0:05.68 TEST_START:
toolkit/components/search/tests/xpcshell/test_remove_profile_engine.js
0:08.57 TEST_END: FAIL
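The idea behind --verbose-if-fails can be sketched generically: capture each test's output, stay quiet on success, and replay the captured log only on failure. This is an illustration of the concept only, not the harness's actual implementation:

```shell
# Run a command with its output captured; print the log only if it fails.
run_quiet() {
  log="$(mktemp)"
  if "$@" >"$log" 2>&1; then
    echo "TEST_END: PASS ($1)"
  else
    echo "TEST_END: FAIL ($1)"
    echo "--- captured output ---"
    cat "$log"
  fi
  rm -f "$log"
}

run_quiet true                                    # passes: log stays suppressed
run_quiet sh -c 'echo "assertion failed"; exit 1' # fails: captured log is printed
```

In a parallel run this keeps interleaved output quiet while still surfacing the full log of whichever test failed.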
TLDR: If you run Android hardware tests, please update your trees.
We've recently migrated the Android hardware testing infrastructure to
use a new type of taskcluster worker. There are changes in tree that
point at the new queues and workers. Because we have limited capacity
in our cluster, we ha
tl;dr I've just landed bug 1415265
<https://bugzilla.mozilla.org/show_bug.cgi?id=1415265> which means you
no longer need .eslintrc.js files extending the configuration in
commonly-named directories.
Since early in the ESLint days, we've needed extra configuration fi
On Tuesday, March 5, 2019 at 10:12:48 PM UTC-8, Cameron McCormack wrote:
> You can use a crashtest for this under layout/svg/crashtests/. Crashtests
> are like reftests that don't actually check against a reference, but will
> just fail if the test crashes, or causes some assert
On Wed, Mar 6, 2019, at 5:05 PM, violet.bugrep...@gmail.com wrote:
> However, both the reviewer and I don't know how to put a timeout test.
> It won't crash in any case, so it's not a crash test. But the mochitest
> can only be used to check JavaScript assertions, if I us
Hi,
When fixing [this](https://phabricator.services.mozilla.com/D20947) bug, I need
to add a test. If the patch is not applied, the test will take an almost
infinite time to render an SVG; if the patch is applied, it renders instantly.
However, both the reviewer and I don't
Just a heads up, it looks like this landed sometime last week for platforms
that support PGO.
This has an unintended consequence of making it look like perf data for
integration branches went awol, but in fact you need to switch from looking
at "opt" data to "pgo" data. Unfortunately since we didn
On 21/01/2019 10:18, Jan de Mooij wrote:
On Fri, Jan 18, 2019 at 10:36 PM Joel Maher wrote:
Are there any concerns with this latest proposal?
This proposal sounds great to me. Thank you!
+1. This seems like the right first step to me.
This allows try to be faster as needed, continues to offer
peace of mind by running the tests on m-c (and sheriffs can backfill if
needed), and removes confusion about building/testing locally vs try. This
would be similar to what we already see, where many people only test opt on
try and land and
For changes to compiled code I
occasionally want to both rebuild and run tests on opt (e.g. because
some test changes also require changes to moz.build files that could
break the build in a way that isn't caught by an artifact build). In
this case adding an extra hour of end-to-end time on try
would be. In the data set from July-December (H2 2018) there were 11 instances
of tests that we originally only scheduled in the OPT config and we didn't have
PGO or Debug test jobs to point out the regression (this is due to scheduling
choices). The worst-case scenario is finding the regre
Earlier today I landed a fix for bug 1517532 that will mean that an
artifact build with MOZ_PGO set will pull artifacts from an automation pgo
build. As a result artifact pgo builds as triggered by a "-p all
--artifact..." will succeed now as well (and consume pgo'd artifacts).
If we end up wanting
>* do we turn off builds as well? I had proposed just the tests, if we decide
>to turn off talos it would make sense to turn off builds.
Would turning off opt builds cause problems if you want to mozregression
an opt build? And would this be an issue? (obviously it might be for
opt-only failur
high probability PGO is
> going away. Would it make sense to wait for that project to wrap up?
>
> -Andrew
>
> On Thu, Jan 3, 2019 at 11:20 AM jmaher wrote:
>
> > I would like to propose that we do not run tests on linux64-opt,
> > windows7-opt, and windows10-opt.
Thanks everyone for your comments on this. It sounds like from a practical
standpoint until we can get the runtimes of PGO builds on try and in
integration to be less than debug build times this is not a desirable change.
A few common responses:
* artifact opt builds on try are fast for quick i
On Fri, Jan 4, 2019 at 11:57 AM Nicholas Alexander
wrote:
> One reason we might not want to stop producing opt builds: we produce
> artifact builds against opt (and debug, with --enable-debug in the local
> mozconfig). It'll be very odd to have --enable-artifact-build and
> _require_ --enable-pgo
On Thu, Jan 3, 2019 at 1:47 PM Chris AtLee wrote:
> Thank you Joel for writing up this proposal!
>
> Are you also proposing that we stop the linux64-opt and win64-opt builds as
> well, except for leaving them as an available option on try? If we're not
> testing them on integration or release bra
Nicholas Alexander wrote on 03.01.19 18:41:
> 1) automation builds need a special configuration piece in place to
> properly support artifact builds. Almost certainly that's not in place for
> PGO builds, since it's such an unusual thing to do: "you want to pack PGO
> binaries into a development
On Thu, Jan 3, 2019 at 7:22 PM Steve Fink wrote:
> I get
> quite a bit of value out of the resulting faster hack-try-debug cycles;
> I would imagine it to be at least as useful to have a turnaround time of
> 1 hour for opt vs 2 hours for pgo.
>
+1. The past week I've been Try-debugging (1) an in
I don't think it's much of a burden, but when we have code complexity it can add
up with a matter of "how useful is this really…" Even if the maintenance
burden is low it is still a tradeoff. I'm just saying I suspect it's
possible to do this, but not sure if it is useful in the end (and I'm not
looking to m
ebug Wd1 results back at the same time
(67-68min) and pgo Wd1 results take twice as long (134min). I imagine
there are much slower test jobs that make this situation cloudier, but
assuming the general pictures holds then it seems like opt is mostly
redundant with debug.
I think a good rule of t
On 01/03/2019 10:07 AM, Justin Wood wrote:
on the specific proposal front I can envision us allowing tests to be run
on non-pgo builds via triggers (so never by default, but always
backfillable/selectable) should someone need to try and bisect an issue
that is discovered... I'm not sure if the co
It's not just tryserver build times. Presumably this will also tend to
increase the time between a patch landing on inbound or autoland and
any resulting test failures showing up.
This seems like a negative in that it means
On Thu, Jan 3, 2019 at 8:43 AM Brian Grinstead
wrote:
> Artifact builds don’t work with PGO, do they? When I do `-p all` on an
> artifact try push I get busted PGO builds (for example:
> https://treeherder.mozilla.org/#/jobs?repo=try&revision=7f8ead55ca97821c60ef38af4dec01b8bff0fdf3&selectedJob=2
Brian
…requiring a full build for frontend-only
changes would increase the turnaround time and resource savings in (3).
Brian
On 03/01/2019 16:17, jmaher wrote:
What are the risks associated with this?
1) try server build times will increase as we will be testing on PGO instead of
OPT
2) we could miss a regression that only shows up on OPT, but if we only ship
PGO and once we leave central we do not build OPT, this is
Can we set it up so we can manually run tests on opt builds, but they
aren't run by default?
I've had many instances where opt (and pgo) fail; but I can't
reproduce a test failure locally and can only do it on try. Letting me
run that test on the opt build will save the additional
I would like to propose that we do not run tests on linux64-opt, windows7-opt,
and windows10-opt.
Why am I proposing this:
1) All test regressions that were found on trunk are mostly on debug, and in
fewer cases on PGO. There are no unique regressions found in the last 6 months
(all the data
> …that explains what runs in which process?
>
> Thanks,
> Ehsan
>
Thanks for the list, that looks like a nice set of defaults. I wanted to add a
few related things:
- In addition to setting prefs via machrc you can use `--setpref`. For example
`./mach run --setpref browser.newtabpage.enabled=false` which would then
override the value in your machrc file. Both
I'm also excited to see this up and running, as it will probably be
quite useful with testing webrender on android. Thank you!
On Thu, Nov 1, 2018 at 6:40 PM Chris Peterson wrote:
On Thu, Nov 1, 2018 at 2:44 PM Geoffrey Brown wrote:
> This week some familiar tier 1 test suites began running on a new test
> platform labelled "Android 7.0 x86" on treeherder. Only a few test suites
> are running so far; more are planned.
>
> Like the existing "
This week some familiar tier 1 test suites began running on a new test
platform labelled "Android 7.0 x86" on treeherder. Only a few test suites
are running so far; more are planned.
Like the existing "Android 4.2" and "Android 4.3" test platforms, these
tests run
of `ok()` test helpers provided by Assert.jsm,
> SimpleTest.js and browser-test.js (as well as some test helpers based on
> them).
>
> `ok()` used to accept up to four arguments: condition, name,
> exception/diagnostic and stack.
> After bug 1467712 lands, `ok()` will only accept t
We are about to land https://bugzilla.mozilla.org/show_bug.cgi?id=1467712
which changes the behavior of `ok()` test helpers provided by Assert.jsm,
SimpleTest.js and browser-test.js (as well as some test helpers based on
them).
`ok()` used to accept up to four arguments: condition, name
On Tue, Oct 23, 2018, at 9:53 AM, Kyle Machulis wrote:
> Adding the following prefs turns off all new profile about:home loads
> and just starts the browser with about:blank:
>
> [runprefs]
> browser.startup.blankWindow=true
> browser.newtabpage.enabled=false
> browser.startup.firstrunSkipsHomepag
I've got some gecko checkouts set in debug mode that I often have to
rebuild, meaning their test profile may be removed/regenerated at times.
Bringing up a new firefox profile with the default about:home and privacy
warnings can take a while. Thanks to some help from nalexander in #build,
have to write your own observer code.
We need projects that would be writing their own pref observers to try this
library so we can land it in Firefox. The library is currently in review
and once we have a JavaScript project to test it with, it can be landed in
Mozilla Central.
If you are interested
> …uploaded as build artefacts so they are available to download by testers (both human
> and machine). I also need to tell the test machines about them so tests can
> be run.
>
> Can anyone advise me how I might go about doing this?
>
For Firefox builds, if you put a file in /builds/worker/ar
___
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform
mhoye wrote:
Does "just do it" imply that it's now OK to import that stuff without
an analog of the previous r+ from Gerv?
I'm putting together a licensing runbook with Legal's help, and the
aim of that will be getting us to that point. As well, Glob is
building some logic into Phabricator to
On 2018-05-18 3:30 AM, Henri Sivonen wrote:
On Thu, May 17, 2018 at 8:31 PM, mhoye wrote:
Well, more than a day or two. The MIT license is fine to include, and we
have a pile of MIT-licensed code in-tree already.
Other already-in-tree MPL-2.0 compatible licenses - the "just do it" set,
basic
Apache 2.0, BSD 2- and 3-clause, LGPL 2.1 and 3.0, GPL
> 3.0 and the Unicode Consortium's ICU.
Does "just do it" imply that it's now OK to import that stuff without
an analog of the previous r+ from Gerv?
> For anything not on that list a legal bug is def. the next
On 2018-04-24 10:36 AM, mhoye wrote:
On 2018-04-24 10:24 AM, David Teller wrote:
What's our policy for this? Are there any restrictions? All the
frameworks I currently have at hand have either an MIT- or an
MIT-like license, so in theory, we need to copy the license somewhere in
the
…testing/profiles/prefs_general.js,
>
> Use this instead:
> testing/profiles/unittest/user.js
>
>
> # Overview
>
> I'm currently in the process of consolidating prefs and extensions
> across all our test harnesses into a single shared location
> (testing/pro
Also sprach Andrew Halberstadt:
> On Wed, May 16, 2018 at 11:07 AM Andreas Tolfsen wrote:
>> Any plans to consolidate the Mn preferences, currently stored in
>> geckoinstance.py?
>
> Yes, but I don't have a timeline. I want to at least finish up
> reftest and xpcshell first. Then time permitting
On Wed, May 16, 2018 at 11:07 AM Andreas Tolfsen wrote:
> Any plans to consolidate the Mn preferences, currently stored in
> geckoinstance.py?
>
Yes, but I don't have a timeline. I want to at least finish up reftest and
xpcshell first. Then time permitting, I'd like to finish up marionette,
cppu
Also sprach Andrew Halberstadt:
> tl;dr - You can now set prefs and install extensions across multiple
> harnesses by modifying the relevant profile under testing/profiles.
This seems like a good change.
> # Further Work
>
> I'm in the middle of migrating harnesses over to this new system.
> No
I'm currently in the process of consolidating prefs and extensions
across all our test harnesses into a single shared location
(testing/profiles
<https://searchfox.org/mozilla-central/source/testing/profiles>). If you
navigate there you'll find several different
"profile-like"
With bug 1445716, there is a new Android-only, tier-1 test suite on
treeherder: geckoview-junit (gv-junit). These are on-device Android
junit tests written in support of geckoview, running in our standard
Android emulator environment on aws instances.
You can run these tests on a local
…either an MIT- or an
MIT-like license, so in theory, we need to copy the license somewhere in
the test repo, right?
I think that this is my question to answer now; I've taken on licensing
questions in Gerv's absence. I'm new to this part of the job, so it'll
take me a day or tw
…currently have at hand have either an MIT- or an
> MIT-like license, so in theory, we need to copy the license somewhere in
> the test repo, right?
Per https://www.mozilla.org/en-US/MPL/license-policy/ it sounds like
you would need to file a licensing bug, even if it's MPL-compatible,
becaus
we need to copy the license somewhere in
the test repo, right?
Cheers,
David
2aaef49".
- Xidorn
Hello everyone! I need some help gathering some real-world data about DLLs
that get loaded into Firefox on Windows (re bug 1435827). I have created a
test build that outputs runtime DLL information to a text file on disk, and
I'd like to see what results you get on your machine(s).
If you
Firefox has an extensive performance testing framework called Talos;
https://wiki.mozilla.org/Buildbot/Talos might be a good place to start.
Dustin
Hello,
I would like to test Firefox browser performance using default settings
and using customized settings (about:config).
Performance parameters I am looking at:
1. Time required to open a webpage
2. Number of packets transferred between the browser and the website while
opening the
Also AMO is accessible to 57 users to download from there instead too :)
On Tue, Oct 3, 2017 at 1:05 PM, Andrew Halberstadt
wrote:
> This is really great Geoff! Hopefully it can cut down the number of new
> intermittents we introduce to the tree. Do you know if orangefactor or
> ActiveData can track the rate of new incoming intermittents? Would be neat
> to see how muc
Just to close the loop on this thread, in 57 this will no longer
disable multi-e10s.
https://bugzilla.mozilla.org/show_bug.cgi?id=1404098
Thanks for the heads up Ben.
On 27 September 2017 at 10:53, Ben Kelly wrote:
> It disables multi-e10s. Forced to one content process.
>
> On Sep 27, 2017 12
This is super cool! Thanks Geoffrey!
This is very cool, Geoff! People have been talking about this idea for a
long time, so it is great to see it actually running. I'm glad to see chaos
mode being tested, too.
Today the test-verify test task will start running as a tier 2 job.
Look for the "TV" symbol on treeherder, on linux-64 test platforms.
TV is intended as an "early warning system" for identifying the
introduction of intermittent test failures. When a mochitest, reftest,
or x
The mochitest, reftest, and xpcshell test harnesses now support a
--verify option. For example:
mach mochitest
docshell/test/test_anchor_scroll_after_document_open.html --verify
In verify mode, the requested test is run multiple times, in various
"modes", in hopes of quickly f