On Fri, Nov 8, 2013 at 2:01 AM, L. David Baron wrote:
> I think this depends on what you mean by "known intermittent
> failures". If a known intermittent failure is the result of any
> regression that leads to a previously-passing test failing
> intermittently, I'd be pretty uncomfortable with ...
On Thursday 2013-11-07 14:13 +0200, Aryeh Gregor wrote:
> On Wed, Nov 6, 2013 at 6:46 PM, Ryan VanderMeulen wrote:
> > I'm just afraid we're going to end up in the same situation we're already in
> > with intermittent failures where the developer looks at it and says "that
> > couldn't possibly be ...
On Wed, Nov 6, 2013 at 5:49 PM, Ryan VanderMeulen wrote:
> What do we gain by having results that can't be trusted?
The same that we gain from allowing any try pushes that don't run
every single test. It's a tradeoff between reliability and time, not
black-and-white. For instance, if I change a ...
On 11/05/2013 07:35 AM, James Graham wrote:
On 05/11/13 15:20, Till Schneidereit wrote:
Do we have any way to identify tests that break particularly often for
specific areas? If so, we could create a mach command that runs just
these tests and finishes quickly. Something like `mach canary-tests`.
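(As a concrete illustration of what such a command could look like, here is
a purely hypothetical sketch: the CANARY_TESTS mapping, the directories in
it, and the use of `mach mochitest` as the harness entry point are all
assumptions, not anything that exists today.)

    #!/usr/bin/env python
    # Hypothetical sketch of a `mach canary-tests` style command: run only
    # the tests that historically break most often for the area touched.

    import subprocess
    import sys

    # Invented mapping from source area to its most frequently broken
    # ("canary") test directories; a real version would be mined from
    # backout and intermittent-failure history.
    CANARY_TESTS = {
        "dom": ["dom/tests/mochitest/general"],
        "layout": ["layout/base/tests"],
    }

    def run_canary_tests(area):
        dirs = CANARY_TESTS.get(area)
        if not dirs:
            print("no canary tests recorded for %r" % area)
            return 1
        status = 0
        for d in dirs:
            # Reuse an existing harness entry point; only the selection
            # logic above is new.
            status |= subprocess.call(["./mach", "mochitest", d])
        return status

    if __name__ == "__main__":
        sys.exit(run_canary_tests(sys.argv[1] if len(sys.argv) > 1 else ""))

The interesting part would be generating that mapping automatically from
the sheriffs' backout data rather than maintaining it by hand.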
On 11/06/2013 03:58 AM, Aryeh Gregor wrote:
> Has anyone considered allowing try pushes to run only specified
> directories of tests, and to allow incremental builds rather than
> clobbers on try? This would make try a heck of a lot faster and more
> resource-efficient, for those who are willing to accept less certain
> results.
On 11/6/2013 10:57 AM, James Graham wrote:
It could be a win if the results are misleading infrequently enough,
relative to the time savings, that the expected time for getting a
patch to stick on m-c decreases. That depends on P(result is
different between try and clobber) and the time savings ...
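(To make that tradeoff concrete, here is a back-of-the-envelope version;
every number below is invented, purely to show the shape of the
calculation.)

    # Expected end-to-end time per landing, clobber vs. incremental try.
    # All numbers are placeholders for illustration only.
    t_clobber = 6.0      # hours: full clobber build + full test run on try
    t_incremental = 2.0  # hours: incremental build + subset of tests
    p_misleading = 0.05  # P(incremental/subset result differs from full result)
    t_penalty = 8.0      # hours lost when a misleading green ends in a backout

    expected_full = t_clobber
    expected_fast = t_incremental + p_misleading * t_penalty
    print("full: %.1fh  fast: %.1fh" % (expected_full, expected_fast))

    # The fast path wins whenever t_incremental + p * t_penalty < t_clobber,
    # i.e. with these numbers for any p below (6.0 - 2.0) / 8.0 = 0.5.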
On 06/11/13 15:49, Ryan VanderMeulen wrote:
On 11/6/2013 6:58 AM, Aryeh Gregor wrote:
Has anyone considered allowing try pushes to run only specified
directories of tests, and to allow incremental builds rather than
clobbers on try? This would make try a heck of a lot faster and more
resource-efficient, for those who are willing to accept less certain
results.
On 11/6/2013 6:58 AM, Aryeh Gregor wrote:
Has anyone considered allowing try pushes to run only specified
directories of tests, and to allow incremental builds rather than
clobbers on try? This would make try a heck of a lot faster and more
resource-efficient, for those who are willing to accept less certain
results.
Has anyone considered allowing try pushes to run only specified
directories of tests, and to allow incremental builds rather than
clobbers on try? This would make try a heck of a lot faster and more
resource-efficient, for those who are willing to accept less certain
results.
On Wed, Nov 6, 2013 at 12:5 ...
On Tue, Nov 5, 2013 at 5:09 PM, Ed Morley wrote:
> Many cases of
> failures would have been caught by just a simple single-platform build+run
> of a single directory's worth of tests.
Requiring a single-platform build seems reasonable. However, it's much
harder to figure out what tests need to be run ...
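(One possible way to attack the "which tests" problem, sketched below with
an invented heuristic: walk up from each changed file looking for a sibling
tests/ directory. Real coverage or dependency data would do much better.)

    # Sketch: pick test directories to run based on which source files a
    # patch touches. The heuristic is made up for illustration.

    import os
    import subprocess

    def changed_files():
        # Ask Mercurial which files the working copy modifies.
        out = subprocess.check_output(["hg", "status", "-mn"])
        return out.decode().split()

    def test_dirs_for(path):
        # Any ancestor directory with a tests/ subdirectory is a plausible
        # place to find the tests covering this file.
        dirs = set()
        d = os.path.dirname(path)
        while d:
            candidate = os.path.join(d, "tests")
            if os.path.isdir(candidate):
                dirs.add(candidate)
            d = os.path.dirname(d)
        return dirs

    if __name__ == "__main__":
        selected = set()
        for f in changed_files():
            selected.update(test_dirs_for(f))
        print("\n".join(sorted(selected)) or "no test dirs found; run everything")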
On Tuesday 2013-11-05 14:44, David Burns wrote:
> We appear to be doing 1 backout for every 15 pushes on a rough
> average[4]. This number, I am sure you can all agree, is far too high,
> especially if we think about the figures that John O'Duinn
> suggests[5] for the cost of each push for running ...
On Tue, Nov 5, 2013 at 7:10 AM, James Graham wrote:
>
> So, as far as I can tell, the heart of the problem is that the
> end-to-end time for the build+test infrastructure is unworkably slow. I
> understand that waiting half a dozen hours — a significant fraction of a
> work day — for a try run ...
On 05/11/2013 18:11, Steve Fink wrote:
These stats are *awesome*! I've been wanting them for a long time, but
never got around to generating them myself. Can we track these on an
ongoing basis?
Sure! Since we need to be working on engineering productivity as a
whole, I think this could be a ...
On Tue, 05 Nov 2013 15:10, James Graham wrote:
On 05/11/13 14:57, Kyle Huey wrote:
On Tue, Nov 5, 2013 at 10:44 PM, David Burns wrote:
We appear to be doing 1 backout for every 15 pushes on a rough average[4].
This number, I am sure you can all agree, is far too high, especially if we
think about the ...
On 11/05/2013 01:49 PM, Chris Peterson wrote:
On 11/5/13, 7:10 AM, James Graham wrote:
What data do we currently have about why the wait time is so long? If
this data doesn't exist, can we start to collect it? Are there easy wins
to be had, or do we need to think about restructuring the way that ...
On 11/5/13, 7:10 AM, James Graham wrote:
What data do we currently have about why the wait time is so long? If
this data doesn't exist, can we start to collect it? Are there easy wins
to be had, or do we need to think about restructuring the way that we do
builds and/or testing to achieve greater ...
These stats are *awesome*! I've been wanting them for a long time, but
never got around to generating them myself. Can we track these on an
ongoing basis?
On 11/05/2013 07:09 AM, Ed Morley wrote:
> On 05 November 2013 14:44:27, David Burns wrote:
>> We appear to be doing 1 backout for every 15 pushes ...
Using https://treestatus.mozilla.org/mozilla-inbound, I looked at the reasons
for tree closures (usually associated with backouts). Going back 50 status
messages, I found:
38 test issues
14 build issues
9 infrastructure issues
2 other issues
Note: some of these closures had >1 issue documented ...
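(A sketch of how a tally like this could be automated. The keyword buckets
and the example messages are invented; real treestatus closure reasons are
free text, so any classification would need hand-tuning.)

    # Tally tree-closure reasons from status messages, allowing a single
    # closure to count in more than one bucket, as in the stats above.

    from collections import Counter

    CATEGORIES = {
        "test": ("test", "orange", "failure"),
        "build": ("build", "bustage"),
        "infrastructure": ("infra", "slave", "buildbot"),
    }

    def classify(message):
        msg = message.lower()
        return [cat for cat, words in CATEGORIES.items()
                if any(w in msg for w in words)] or ["other"]

    def tally(messages):
        counts = Counter()
        for m in messages:
            counts.update(classify(m))  # a closure can hit >1 bucket
        return counts

    print(tally([
        "closed: mochitest-2 failures on all platforms",
        "closed: windows build bustage",
        "closed: buildbot infra issues",
    ]))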
A proposal to get this fixed would be immense, and it ties into the
projects we are doing to help keep engineers productive. Things like
parallel testing[1] should be applied to as many test suites as
possible, along with a smoke test to make feedback loops tighter.
The main one for me is getting easily accessible ...
On 05/11/13 15:20, Till Schneidereit wrote:
Do we have any way to identify tests that break particularly often for
specific areas? If so, we could create a mach command that runs just these
tests and finishes quickly. Something like `mach canary-tests`.
Isn't the end game for this kind of approach ...
On 05 November 2013 15:20:06, Till Schneidereit wrote:
Do we have any way to identify tests that break particularly often for
specific areas? If so, we could create a mach command that runs just
these tests and finishes quickly. Something like `mach canary-tests`.
Agree this would be a good way ...
On Tue, Nov 5, 2013 at 4:09 PM, Ed Morley wrote:
> On 05 November 2013 14:44:27, David Burns wrote:
>
>> We appear to be doing 1 backout for every 15 pushes on a rough
>> average[4].
>>
>
> I've been thinking about this some more - and I believe the ratio is
> probably actually even worse than the numbers suggest ...
On 05/11/13 14:57, Kyle Huey wrote:
On Tue, Nov 5, 2013 at 10:44 PM, David Burns wrote:
We appear to be doing 1 backout for every 15 pushes on a rough average[4].
This number, I am sure you can all agree, is far too high, especially if we
think about the figures that John O'Duinn suggests[5] for ...
On 05 November 2013 14:44:27, David Burns wrote:
We appear to be doing 1 backout for every 15 pushes on a rough
average[4].
I've been thinking about this some more - and I believe the ratio is
probably actually even worse than the numbers suggest, since:
* Depending on how the backouts are performed ...
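(Rough arithmetic on what a 1-in-15 backout rate costs: the push volume and
per-push cost below are placeholders, since the actual figure is in John
O'Duinn's post [5] and not quoted here.)

    # Ballpark cost of a 1-in-15 backout rate; all inputs are invented.
    pushes_per_day = 80      # placeholder push volume for mozilla-inbound
    backout_rate = 1.0 / 15  # ~6.7% of pushes get backed out
    cost_per_push = 30.0     # placeholder cost of one push, per [5]

    # Every backout wastes at least two pushes: the bad landing and the
    # backout itself (the eventual re-landing is ignored here).
    wasted_per_day = pushes_per_day * backout_rate * 2 * cost_per_push
    print("~%.0f wasted per day under these assumptions" % wasted_per_day)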
On Tue, Nov 5, 2013 at 10:44 PM, David Burns wrote:
> We appear to be doing 1 backout for every 15 pushes on a rough average[4].
> This number, I am sure you can all agree, is far too high, especially if we
> think about the figures that John O'Duinn suggests[5] for the cost of each
> push for running ...
After the major tree closure[1] last week I wanted to see how it
impacted the tree closure stats (stats below in this email) that I have
been watching. I have also been looking at how many backouts the
sheriffs are doing and how the two correlate. For those interested,
the tree closure ...