>
> > I think if we want to do this, it should be extremely easy - by which I
> mean automatic, really. This shouldn’t be too tricky, I think. We just need
> to produce a diff of new test classes and methods within existing classes.
Having a CircleCI job that automatically runs all new/modified tests…
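A rough sketch of what producing that diff could look like, assuming JUnit-style `@Test` methods and a unified `git diff` as input (the function and sample diff are illustrative, not part of any existing tooling):

```python
import re

def new_test_methods(diff_text):
    """Scan a unified diff for added JUnit-style test methods.

    Returns (file, method) pairs for added lines ('+') that are either
    preceded by an added @Test annotation or follow the test* naming
    convention.
    """
    current_file = None
    pending_annotation = False
    found = []
    for line in diff_text.splitlines():
        if line.startswith("+++ b/"):
            current_file = line[len("+++ b/"):]
            pending_annotation = False
        elif line.startswith("+") and not line.startswith("+++"):
            added = line[1:].strip()
            if added.startswith("@Test"):
                pending_annotation = True
                continue
            m = re.search(r"void\s+(\w+)\s*\(", added)
            if m and (pending_annotation or m.group(1).startswith("test")):
                found.append((current_file, m.group(1)))
            pending_annotation = False
    return found

# Hypothetical sample diff adding one annotated test method:
diff = """\
+++ b/test/unit/FooTest.java
@@ -10,4 +10,9 @@
     }
+    @Test
+    public void writeThenRead()
+    {
+    }
"""
print(new_test_methods(diff))  # → [('test/unit/FooTest.java', 'writeThenRead')]
```

A CI job could then feed each returned method into a repeated-run task; real Java parsing (e.g. via an AST) would be more robust than regexes, but the regex version shows the shape of the idea.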
> Side note, Butler is reporting CASSANDRA-17348 as open (it's resolved as a
> duplicate).
This is fixed.
On Wed, 10 Aug 2022 at 17:54, Josh McKenzie wrote:
> “ We can start by putting the bar at a lower level and raise the level
> over time when most of the flakies that we hit are above that level.”
> My only concern is who will track that, and how.
What's Butler's logic for flagging things flaky? Maybe a "flaky low" vs.
"flaky high" distinction?
Perhaps flaky tests need to be handled differently. Is there a way to build a
statistical model of the current flakiness of a test that we can then use
during testing to accept the failures? If an acceptable level of flakiness is
established, then when the test fails it needs to be run again, or…
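One possible shape for such a model, sketched with a simple binomial test (the rates, rerun counts, and threshold here are illustrative assumptions, not project policy): estimate each test's historical failure rate, rerun a failing test, and only flag a regression when the observed failure count would be very unlikely under that rate.

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or more
    failures in n runs if the true flake rate is p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def looks_like_regression(historical_rate, reruns, observed_failures, alpha=0.01):
    """Flag a regression only when the observed failures would occur with
    probability < alpha under the test's historical flake rate."""
    return binom_tail(reruns, observed_failures, historical_rate) < alpha

# A test that historically fails ~1 in 10 runs:
print(looks_like_regression(0.10, 20, 2))   # → False (2/20 is normal flakiness)
print(looks_like_regression(0.10, 20, 10))  # → True  (10/20 is almost surely a real break)
```

This also gives a natural place to "raise the bar" over time: tightening the acceptable historical rate is a one-line policy change rather than a rewrite.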
> We can start by putting the bar at a lower level and raise the level over time
+1
> One simple approach that has been mentioned several times is to run the new
> tests added by a given patch in a loop using one of the CircleCI tasks
I think if we want to do this, it should be extremely easy…
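The loop itself could be as small as the sketch below; the command is injectable because the real invocation (e.g. an `ant` test target or a CircleCI step) depends on the project setup, so nothing here assumes a particular build tool:

```python
import subprocess

def first_failure(cmd, runs=100):
    """Run `cmd` repeatedly; return the 1-based run number of the first
    failure, or None if every run passes."""
    for i in range(1, runs + 1):
        if subprocess.run(cmd, capture_output=True).returncode != 0:
            return i
    return None

# For example, with a command that always succeeds:
print(first_failure(["true"], runs=5))  # → None
```

Combined with the new-test diff, this is enough for a CI job to loop each newly added test and fail the build on the first flake.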
“In my opinion, not all flakies are equal. Some fail every 10 runs, some fail
1 in 1,000 runs.”
Agreed, for everything that is not a new test/regression and not infra-related.
“We can start by putting the bar at a lower level and raise the level over
time when most of the flakies that we hit are above that level.”
At this point it is clear that we will probably never be able to remove some
level of flakiness from our tests. For me the questions are: 1) Where do we
draw the line for a release? and 2) How do we maintain that line over time?
In my opinion, not all flakies are equal. Some fail every 10 runs, some fail
1 in 1,000 runs.
> With that said, I guess we can just review on a regular basis what exactly
> the latest flakes are, rather than raw numbers, which also change quickly up
> and down with the first change in the infra.
>
+1, I am in favour of taking a pragmatic approach.
If flakies are identified and triaged enough that, wit…
Re: 17738 - the ticket was about any new properties which are actually not of
the new types. It had to guarantee that there is no disconnect between
updating the Settings Virtual Table after startup and the JMX setters/getters.
(In one of its “brother” tickets, the issues we found have existed since 4.0.)
I bring i…