On Wed, Jan 5, 2022 at 10:38 PM Mick Semb Wever wrote:
> +(?) Is the devil we know
>
> + Bi-directional relationship between patches showing which branches it
> was applied to (and how). From the original commit or any of the merge
> commits I can see which branches, and where the original commit …
>
> # create a review branch starting from the 4.0 patch
> git checkout -b patch-4.0-review
>
> # record the history of the 4.1 branch up to the parent of the 4.1 patch,
> # keeping our content (-s ours takes no changes from the other side)
> git merge -s ours patch-4.1~1
>
> # merge the 4.1 patch itself, but stop before creating the commit
> git merge --no-commit patch-4.1
>
> # restore the working tree to the 4.0 patch
> git checkout patch-4.0 .
>
> # record the result as the merge commit
> git commit
>
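As a hedged aside (not from the thread itself): if the goal is simply to see what an existing merge commit changed on the branch it landed on, plain git can show that too. The SHA and branch name below are placeholders.

# Not the workflow above, just standard git for inspecting an existing merge commit.
git diff <merge-sha>^1 <merge-sha>       # everything the merge introduced relative to its first parent
git log --first-parent --oneline trunk   # branch history with one entry per merge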
From: bened...@apache.org
Date: Wednesday, 5 January 2022 at 21:07
To: Mick Semb Wever
Cc: dev
Subject: Re: [DISCUSS] Releasable trunk and quality
> If you see a merge commit in the history, isn't it normal to presume that it
> will contain the additional change for that branch for the parent commit
> getting merged in?
Sure, but it is exceptionally non-trivial to treat the work as a single diff in
any standard UX. In practice it becomes …
> - hides changes inside the merge commit
>
This is what all merge commits do. If you see a merge commit in the
history, isn't it normal to presume that it will contain the additional
change for that branch for the parent commit getting merged in?
> - is exposed to race w/other committers across …

… (headache enough when it goes wrong).

From: bened...@apache.org
Date: Tuesday, 4 January 2022 at 23:52
To: David Capwell, Joshua McKenzie <jmcken...@apache.org>
Cc: Henrik Ingo, dev <dev@cassandra.apache.org>
Subject: Re: [DISCUSS] Releasable trunk and quality

That all sounds terribly complicated to me.
My view is that we should switch to the branch strategy outlined by Henrik (I
happen to prefer it anyway) and move to GitHub integration …
Date: … at 23:33
To: Joshua McKenzie
Cc: Henrik Ingo, dev
Subject: Re: [DISCUSS] Releasable trunk and quality
The more I think on it, the more I am anyway strongly -1 on having some
bifurcated commit process. We should decide on a uniform commit process for the
whole project, for all patches, whatever that may be.
> The more I think on it, the more I am anyway strongly -1 on having some
> bifurcated commit process. We should decide on a uniform commit process for
> the whole project, for all patches, whatever that may be.
Making the process stable and handling all the random things we need to handle
takes …
I put together a draft confluence wiki page (login required) for the Build
Lead role covering what we discussed in the thread here. Link:
https://cwiki.apache.org/confluence/pages/resumedraft.action?draftId=199527692&draftShareId=96dfa1ef-d927-427a-bff8-0cf711c790c9&;
The only potentially controversial …
FWIW, I thought I could link to an example MongoDB commit:
https://github.com/mongodb/mongo/commit/dec388494b652488259072cf61fd987af3fa8470
* Fixes start from trunk or whatever is the highest version that includes
the bug
* It is then cherry picked to each stable version that needs the fix. Above …
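A minimal sketch of that trunk-first flow in git, purely illustrative; the branch names, ticket number, and SHA below are placeholders, not project policy:

# Illustrative only: fix lands on trunk, then is cherry-picked back.
git checkout trunk
git commit -m "CASSANDRA-XXXXX: fix ..."   # original fix commit on trunk
git checkout cassandra-4.0
git cherry-pick -x <trunk-sha>             # -x records "(cherry picked from commit ...)"
git checkout cassandra-3.11
git cherry-pick -x <trunk-sha>             # repeat for each older branch that needs the fix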
… 2021 at 18:53
To: dev@cassandra.apache.org
Subject: Re: [DISCUSS] Releasable trunk and quality
> I like a change originating from just one commit, and having tracking
> visible across the branches. This gives you immediate information about
> where and how the change was applied without having to go to the jira
> ticket (and relying on it being accurate)
I have the exact opposite experience …
Does somebody else use the git workflow we do as of now in the Apache
universe? Aren't we quite unique? While I do share the same opinion
Mick has in his last response, I also see the disadvantage of having
the commit history polluted by merges. I am genuinely curious if there
is any other Apache project …
> > Merge commits aren’t that useful
> >
> I keep coming back to this. Arguably the only benefit they offer now is
> procedurally forcing us to not miss a bugfix on a branch, but given how
> much we amend many things presently anyway that dilutes that benefit.

Doesn't this come down to how …
> … different CI pipeline for multi-version development
> * It is particularly not worth countenancing solely to retain the
>   limited utility of merge commits

From: Mick Semb Wever
Date: Sunday, 12 December 2021 at 11:47
To: dev@cassandra.apache.org
Subject: Re: [DISCUSS] Releasable trunk and quality
> > I find it cleaner that work is found associated to one sha on the hardest
> > branch, and we treat (or should be) CI holistically across branches.
>
> If we -s ours and amend merge commits on things that straddle stuff like
> 8099, MS rewrite, Simulator, guardrails, etc, then we have multiple SHAs …

… _only_ patches? If so, that seems rather weak.

From: Brandon Williams
Date: Thursday, 9 December 2021 at 20:25
To: dev@cassandra.apache.org
Subject: Re: [DISCUSS] Releasable trunk and quality

+1 to trying trunk first.

On Thu, Dec 9, 2021 at 1:52 PM Mick Semb Wever wrote:
>
> > So let me pose the question here to the list: is there anyone who would
> > like to advocate for the current merge strategy …
Good with all the above (reasonable arguments) except I don't understand:

> I find it cleaner that work is found associated to one sha on the hardest
> branch, and we treat (or should be) CI holistically across branches.

If we -s ours and amend merge commits on things that straddle stuff like
8099 …
+1 on Mick’s suggestion (nb)

On Thu, 9 Dec 2021 at 14:46, Mick Semb Wever wrote:
>
> > So let me pose the question here to the list: is there anyone who would
> > like to advocate for the current merge strategy (apply to oldest LTS, merge
> > up, often -s ours w/new patch applied + amend) instead of …
>
> So let me pose the question here to the list: is there anyone who would
> like to advocate for the current merge strategy (apply to oldest LTS, merge
> up, often -s ours w/new patch applied + amend) instead of "apply to trunk
> and cherry-pick back to LTS"?
I'm in favour of the current merge strategy …
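For readers who haven't worked this way before, a rough sketch of the "apply to oldest LTS, merge up, often -s ours w/new patch applied + amend" flow being referred to; the branch names, patch files, and exact steps are placeholders and vary by committer, so treat this as an assumption-laden illustration rather than project policy:

# Rough sketch only; names are placeholders, not project policy.
git checkout cassandra-3.0
git am CASSANDRA-XXXXX-3.0.patch     # fix lands on the oldest affected branch
git checkout cassandra-3.11
git merge cassandra-3.0              # merge up when the change applies cleanly
git checkout cassandra-4.0
git merge -s ours cassandra-3.11     # when it doesn't: take the history but none of the content
git apply CASSANDRA-XXXXX-4.0.patch  # apply the branch-specific rework
git commit -a --amend                # fold it into the merge commit
git checkout trunk
git merge cassandra-4.0              # repeat up to trunk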
On Tue, Dec 7, 2021 at 11:13 AM Joshua McKenzie wrote:
>
> I'd frame the reasoning differently: Our current merge strategy is
> vestigial and we can't rely on it in many, if not most, cases. …
I'd frame the reasoning differently: Our current merge strategy is
vestigial and we can't rely on it in many, if not most, cases. Patches
rarely merge cleanly across majors, requiring -s ours w/amend or other
changes per branch. This effectively clutters up our git history, hides
multi-branch change …
On Tue, Dec 7, 2021 at 8:18 AM Joshua McKenzie wrote:
> So let me pose the question here to the list: is there anyone who would
> like to advocate for the current merge strategy (apply to oldest LTS, merge
> up, often -s ours w/new patch applied + amend) instead of "apply to trunk
> and cherry-pick back to LTS"?
… are maintained outside of the project as well.

From: Joshua McKenzie
Date: Tuesday, 7 December 2021 at 13:08
To: dev@cassandra.apache.org
Subject: Re: [DISCUSS] Releasable trunk and quality

> it would be far preferable for consistency of behaviour to rely on shared
> infrastructure if possible

For those of us using CircleCI, we can get a lot of the benefit by having a
script that rewrites and cleans up ci …
> … more options via the tooling, and multi-branch reverts will be something
> we should document very clearly should we even choose to go that route
> (a lot of room to make mistakes there).

> … changes (i.e. potentially destabilizing) to be happening on trunk only, so
> perhaps we can get away with slightly different workflows or policies based
> on whether you're doing a multi-branch bugfix or a feature on trunk. Bears …
> … it would be far preferable for consistency of behaviour to rely on shared
> infrastructure if possible. I would probably be against mandating these
> scripts, at least.

> … game for revisiting our merge strategy. I don't see much difference in
> labor between merging between branches vs. preparing separate patches for an
> individual developer, however I'm sure there's maintenance and integration
> implications …

From: Joshua McKenzie
Date: Monday, 6 December 2021 at 22:20
To: dev@cassandra.apache.org
Subject: Re: [DISCUSS] Releasable trunk and quality

As I work through the scripting on this, I don't know if we've documented
or clarified the following (don't see it here:
https://cassandra.apache.org/_/develop …
… wrote:
>
> I raised this before, but to highlight it again: how do these approaches
> interface with our merge strategy?
>
> We might have to rebase several dependent merge commits and want to
> merge them atomically. So far as I know th…
… fantastic. If not, given how important these things are, should we consider
revisiting our merge strategy?

From: Joshua McKenzie
Date: Wednesday, 17 November 2021 at 16:39
To: dev@cassandra.apache.org
Subject: Re: [DISCUSS] Releasable trunk and quality
Thanks for the feedback and insight Henrik; it's valuable to hear how other
large complex infra projects have tackled this problem set.
To attempt to summarize, what I got from your email:
[Phase one]
1) Build Barons: rotation where there's always someone active tying
failures to changes and adding …
There's an old joke: How many people read Slashdot? The answer is 5. The
rest of us just write comments without reading... In that spirit, I wanted
to share some thoughts in response to your question, even if I know some of
it will have been said in this thread already :-)
Basically, I just want to …
Thank you Josh.
“I think it would be helpful if we always ran the repeated test jobs at
CircleCI when we add a new test or modify an existing one. Running those
jobs, when applicable, could be a requirement before committing. This
wouldn't help us when the changes affect many different tests or we …”
To checkpoint this conversation and keep it going, the ideas I see
in-thread (light editorializing by me):
1. Blocking PR merge on CI being green (viable for single branch commits,
less so for multiple)
2. A change in our expected culture of "if you see something, fix
something" when it comes to te
Hi all,
> we already have a way to confirm flakiness on circle by running the test
> repeatedly N times. Like 100 or 500. That has proven to work very well
> so far, at least for me. #collaborating #justfyi

I think it would be helpful if we always ran the repeated test jobs at
CircleCI when we add …
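The repeated-run jobs referenced above live in the project's CircleCI config; purely as an illustration of the same idea, a single test can be hammered in a loop locally. The ant target and test class below are assumptions, not taken from this thread:

# Illustration only; assumes the usual ant target and a placeholder test class.
for i in $(seq 1 100); do
  ant testsome -Dtest.name=org.apache.cassandra.cql3.SomeSuspectTest \
    || { echo "failed on iteration $i"; break; }
done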
> … we noticed CI going from a
> steady 3-ish failures to many and it's getting fixed. So we're moving in
> the right direction imo.

An observation about this: there's tooling and technology widely in use to
help prevent ever getting into this state (to Benedict's point: blocking
merge on CI failure …
I agree with David. CI has been pretty reliable besides the random
jenkins going down or timeout. The same 3 or 4 tests were the only flaky
ones in jenkins and Circle was very green. I bisected a couple failures
to legit code errors, David is fixing some more, others have as well, etc.
It is good …
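The bisecting mentioned there is plain git; a hedged sketch follows, where the known-good ref and the test target are placeholders and the run command assumes the build fails (exits non-zero) when the test fails:

# Sketch of bisecting a test failure back to the commit that introduced it.
git bisect start
git bisect bad HEAD                  # the failure reproduces here
git bisect good <last-known-good>    # a ref where the test passed
git bisect run ant testsome -Dtest.name=org.apache.cassandra.SomeFailingTest
git bisect reset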
On Wed, Nov 3, 2021 at 1:26 PM David Capwell wrote:
> > I think we're going to need a system that
> > understands the difference between success, failure, and timeouts
>
> I am curious how this system can know that the timeout is not an actual
> failure. There was a bug in 4.0 with time seri…
> It’s hard to gate commit on a clean CI run when there’s flaky tests
I agree, this is also why so much effort was done in the 4.0 release to remove
as much as possible. Just over 1 month ago we were not really having a flaky
test issue (outside of the sporadic timeout issues; my circle ci runs were …
On Wed, Nov 3, 2021 at 12:35 PM bened...@apache.org wrote:
>
> The largest number of test failures turns out (as pointed out by David) to be
> due to how arcane it was to trigger the full test suite. Hopefully we can get
> on top of that, but I think a significant remaining issue is a lack of tru…
… being well).
From: Brandon Williams
Date: Wednesday, 3 November 2021 at 17:07
To: dev@cassandra.apache.org
Subject: Re: [DISCUSS] Releasable trunk and quality
On Mon, Nov 1, 2021 at 5:03 PM David Capwell wrote:
>
> > How do we define what "releasable trunk" means?
>
> One thing I would love is for us to adopt a “run all tests needed to release
> before commit” mentality, and to link a successful run in JIRA when closing
> (we talked about this once in slack) …
>
> It'd be great to
> expand this, but it's been somewhat difficult to do, since the last time a
> bootstrap test was attempted it immediately uncovered enough issues to
> keep us busy fixing them for quite some time. Maybe it's about time to try
> that again.
I'm going to go with a "yes please"
I'll merge 16262 and the Harry blog-post that accompanies it shortly.
Having 16262 merged will significantly reduce the amount of resistance one
has to overcome in order to write a fuzz test. But this, of course, only
covers short/small/unit-test-like tests.
For longer running tests, I guess for now …
Did I hear my name? 😁
Sorry Josh, you are wrong :-) 2 out of 30 in two months were real bugs
discovered by flaky tests and one of them was very hard to hit. So 6-7%. I
think that report I sent back then didn’t come through so the topic was
cleared in a follow-up mail by Benjamin; with a lot of swe…
To your point Jacek, I believe in the run-up to 4.0 Ekaterina did some
analysis and something like 18% (correct me if I'm wrong here) of the test
failures we were considering "flaky tests" were actual product defects in
the database. With that in mind, we should be uncomfortable cutting a
release if …
>
> we already have a way to confirm flakiness on circle by running the test
> repeatedly N times. Like 100 or 500. That has proven to work very well
> so far, at least for me. #collaborating #justfyi
>
It does not prove that it is test flakiness. It can still be a bug in
the code which occurs …
Hi,
we already have a way to confirm flakiness on circle by running the test
repeatedly N times. Like 100 or 500. That has proven to work very well
so far, at least for me. #collaborating #justfyi
On the 60+ failures, it is not as bad as it looks. Let me explain. I have
been tracking failures in 4…
>
> I don’t think this means guaranteeing there are no failing tests (though
> ideally this would also happen), but about ensuring our best practices are
> followed for every merge. 4.0 took so long to release because of the amount
> of hidden work that was created by merging work that didn’t meet the …
>
> I have to apologise here. CircleCI did not uncover these problems, apparently
> due to some way it resolves dependencies,
I double checked your CircleCI run for the trunk branch, and the problem
doesn’t have to do with “resolves dependencies”, the problem lies with our CI
being too complex an
> How do we define what "releasable trunk" means?
One thing I would love is for us to adopt a “run all tests needed to release
before commit” mentality, and to link a successful run in JIRA when closing (we
talked about this once in slack). If we look at CircleCI we currently do not
run all the …
> How do we define what "releasable trunk" means?
For me, the major criterion is ensuring that work is not merged that is known
to require follow-up work, or could reasonably have been known to require
follow-up work if better QA practices had been followed.
So, a big part of this is ensuring we …