Staging Branches

2015-05-07 Thread Benedict Elliott Smith
A good practice for a committer applying a patch is to build and run the
unit tests before updating the main repository, but doing this for every
branch is infeasible and hurts local productivity. Alternatively,
uploading the result to your development tree and waiting a few hours for
CI to validate it is likely to result in a painful cycle of race-to-merge
conflicts, rebasing, and waiting again for the tests to run.

So I would like to propose a new strategy: staging branches.

Every major branch would have a parallel branch:

cassandra-2.0 <- cassandra-2.0_staging
cassandra-2.1 <- cassandra-2.1_staging
trunk <- trunk_staging

On commit, the idea would be to perform the normal merge process on the
_staging branches only. CI would then run on every single git ref, and as
these passed we would fast forward the main branch to the latest validated
staging git ref. If one of them breaks, we go and edit the _staging branch
in place to correct the problem, and let CI run again.

So, a commit would look something like:

patch -> cassandra-2.0_staging -> cassandra-2.1_staging -> trunk_staging

wait for CI, see 2.0, 2.1 are fine but trunk is failing, so

git rebase -i trunk_staging 
fix the problem
git rebase --continue

wait for CI; all clear

git checkout cassandra-2.0; git merge cassandra-2.0_staging
git checkout cassandra-2.1; git merge cassandra-2.1_staging
git checkout trunk; git merge trunk_staging
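The promote step above can be sketched end-to-end. This is a throwaway illustration (the toy repo, file name, and committer identity are invented); the one substantive addition is `--ff-only`, which makes the fast-forward promise explicit: the merge fails outright, rather than creating a merge commit, if the main branch has somehow diverged.

```shell
set -e
# Throwaway repo; 'trunk' / 'trunk_staging' mirror the proposal.
cd "$(mktemp -d)"
git init -q
git config user.email ci@example.com && git config user.name ci
echo base > file && git add file && git commit -qm "base"
git branch trunk                       # the stable branch
git checkout -qb trunk_staging trunk   # the parallel staging branch
echo patch >> file && git commit -qam "patch"
# ... CI validates the trunk_staging ref here ...
git checkout -q trunk
git merge -q --ff-only trunk_staging   # pure fast-forward; no merge commit possible
```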

This does introduce some extra steps to the merge process, and we will have
branches whose history we edit, but the amount of edited history will be
limited, and it will remain isolated from the main branches. I'm not sure
how averse people are to this. An alternative policy might be to require
that we merge locally, push to our development branches, and then await CI
approval before merging. We might only require this to be repeated if there
was a new merge conflict on the final commit that could not be automatically
resolved (although auto-merge can break stuff too).

Thoughts? It seems that if we want an "always releasable" set of branches, we
need something along these lines. I certainly break tests, or the build
itself, by mistake with alarming regularity. Fixing this with merges leaves a
confusing git history, and leaves the build broken for everyone else in the
meantime, so the authors of patches applied afterwards, and of development
branches based on top, can't tell whether they broke anything themselves.


Re: Staging Branches

2015-05-07 Thread Aleksey Yeschenko
If the goal is to have branches that always pass all the tests (aka
‘stable’ branches), then I don’t see any way around it, so +1 to this.

I’d go for branch names that are less likely to be confused with the final
cassandra-X via autocomplete, though: staging-2.0, staging-2.1, staging-trunk.

-- 
AY

On May 7, 2015 at 12:06:36, Benedict Elliott Smith (belliottsm...@datastax.com) 
wrote:



Re: Staging Branches

2015-05-07 Thread Gary Dusbabek
+1. I would ask that if we end up doing this that it be documented on the
wiki.

Gary.

On Thu, May 7, 2015 at 4:05 AM, Benedict Elliott Smith <
belliottsm...@datastax.com> wrote:



Re: Staging Branches

2015-05-07 Thread Sylvain Lebresne
> If one of them breaks, we go and edit the _staging branch in place to
> correct the problem, and let CI run again.

I would strongly advise against *in place* edits. If we do them, we'll end up
in weird situations which will be annoying for everyone. Editing commits that
have been shared is almost always a bad idea, and that's especially true for
branches that will see some amount of concurrency, like those staging
branches.

Even if such problems are rare, better to avoid them in the first place by
simply committing new "fixup" commits, as we currently do. Granted, this
gives you a slightly less clean history, but to the best of my knowledge
that hasn't been a pain point so far.

> wait for CI; all clear
>
> git checkout cassandra-2.0; git merge cassandra-2.0_staging
> git checkout cassandra-2.1; git merge cassandra-2.1_staging
> git checkout trunk; git merge trunk_staging
>
> This does introduce some extra steps to the merge process

If we do this, we should really automate that last part (have the CI
environment merge the staging branch to the non-staging ones on success).
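A sketch of what that automated success hook might run; the toy repository below merely stands in for the real one (only the branch names come from the thread), and everything else is invented for illustration:

```shell
set -e
# Toy repo standing in for the real one; branch names from the thread.
cd "$(mktemp -d)" && git init -q
git config user.email ci@example.com && git config user.name ci
echo base > f && git add f && git commit -qm "base"
for b in cassandra-2.0 cassandra-2.1 trunk; do
  git branch "$b"                        # release branch
  git checkout -qb "${b}_staging" "$b"   # staging branch, one patch ahead
  echo "$b" >> f && git commit -qam "patch for $b"
done
# The part a CI success hook could run: promote each validated staging ref.
for b in cassandra-2.0 cassandra-2.1 trunk; do
  git checkout -q "$b"
  git merge -q --ff-only "${b}_staging"
done
```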

> It seems if we want an "always releasable" set of branches, we need
> something along these lines.

Agreed as far as having staging branches vetoed by CI goes. Less sure about
the edit-commit-in-place part as said above.

> I certainly break tests by mistake, or the build itself, with alarming
> regularity.

I agree running tests is painful, but at least for the build, it should be
the committer's responsibility to build before merging. We all forget from
time to time and that's OK, but it's not OK if it's "alarmingly regular".

--
Sylvain


Re: Staging Branches

2015-05-07 Thread Aleksey Yeschenko
> Agreed as far as having staging branches vetoed by CI goes. Less sure about 
> the edit-commit-in-place part as said above.

Right. That’s just an implementation detail (in place vs. extra fix-up
commits). The latter is less annoying for the team in general, so let’s do
that instead.

> If we do this, we should really automate that last part (have the CI 
> environment merge the staging branch to the non-staging ones on success).

This would be nice, but would require a bot with commit access, wouldn’t it?


On May 7, 2015 at 16:12:31, Sylvain Lebresne (sylv...@datastax.com) wrote:



Re: Staging Branches

2015-05-07 Thread Jake Luciani
There is still the problem of a commit coming into the staging branch after
a previous commit that is being tested. When the first commit is merged to
the stable branch, it will include both tested and untested versions.

Let's not take "releasable branches" too literally; we still need to tag
and test every release. If there is a bad merge, CI will catch it, just
like in your proposal.

On Thu, May 7, 2015 at 5:05 AM, Benedict Elliott Smith
 wrote:



-- 
http://twitter.com/tjake


Re: Staging Branches

2015-05-07 Thread Aleksey Yeschenko
> There is still the problem of a commit coming into the staging branch after
> a previous commit that is being tested.
> When the first commit is merged to the stable branch it will include
> both tested and untested versions.

How so? We’ll only be merging up to the latest tested ref into the stable 
branch, not all of staging.


On May 7, 2015 at 16:21:54, Jake Luciani (jak...@gmail.com) wrote:



Re: Staging Branches

2015-05-07 Thread Jake Luciani
Ah, missed that. I still think this is overly complicated: we still need to
run CI before we release, so what does this buy us?

On Thu, May 7, 2015 at 9:24 AM, Aleksey Yeschenko  wrote:


Re: Staging Branches

2015-05-07 Thread Benedict Elliott Smith
> If we do it, we'll end up in weird situations which will be annoying for
> everyone

Such as? I'm not disputing, but if we're to assess the relative
strengths/weaknesses, we need to have specifics to discuss.

If we do go with this suggestion, we will most likely want to enable a
shared git rerere cache, so that rebasing is not painful when there are
future commits.
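For context, rerere ("reuse recorded resolution") is switched on per repository, and the recorded resolutions live in `.git/rr-cache`, which is the directory a shared cache would have to distribute (how to share it is left open here). A minimal sketch, using a throwaway repo:

```shell
set -e
cd "$(mktemp -d)" && git init -q
# Enable rerere: git records each conflict resolution and replays it
# automatically the next time the identical conflict appears.
git config rerere.enabled true
git config rerere.autoUpdate true   # also re-stage the replayed resolution
git config rerere.enabled           # prints: true
```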

If instead we go with "repairing" commits, we cannot have a "queue" of
things to merge up to. Say you have a string of commits awaiting approval,
C1 to C4; you made C1, and it broke something. You introduce C5 to fix it,
but the tests are still broken. Did you not really fix it? Or perhaps one
of C2 to C4 is to blame, but which? And have you accidentally broken *them*
with your commit? Who knows. Either way, we definitely cannot fast forward.
At the very best we can hope that the new merge did not conflict with or
mess up other people's C2 to C4 commits, which they now have to merge on
top of. But what if another merge, C6, comes in in the meantime, and C2
really did also break the tests in some way: how do we determine that C2
was to blame, and not C6, or C3 or C4? What do the committers for each of
these do? We end up in a lengthy tussle, and aren't able to commit any of
these to the mainline until all of them are resolved. Really, we have to
prevent any merges to the staging repository until the mistakes are fixed.
Since the race window in this scenario is the length of time cassci takes
to vet the commits, these problems are much more likely than the current
race to commit.

In the scheme I propose, the person who broke the build rebases everyone's
branches onto his now-fixed commit, the next broken commit gets blamed in
turn, and all other commits being merged in on top can go in smoothly. The
only pain point I can think of is the multi-branch rebase, but that is
solved by git rerere.

> I agree running tests is painful, but at least for the build, this should
> be the responsibility of the committer to build before merging


Why make the distinction if we're going to have staging commits? It's a bit
of a waste of time to run three "ant real-clean && ant" tasks; it increases
the race window for merging (which is painful whether or not it involves a
rebase); and it is not a *typical* occurrence ("alarming" is subjective).

On Thu, May 7, 2015 at 2:12 PM, Sylvain Lebresne 
wrote:



Re: Staging Branches

2015-05-07 Thread Jake Luciani
git rebase -i trunk_staging 
fix the problem
git rebase --continue

In this situation, if there was an untested follow-on commit, wouldn't
you need to force push?
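Presumably, yes: after an in-place edit the published staging ref no longer fast-forwards, so a plain push is rejected. A hedged sketch (both repositories here are throwaway stand-ins), using `--force-with-lease` rather than a bare `--force` so the push is refused if someone else updated the remote branch in the meantime:

```shell
set -e
work=$(mktemp -d) && cd "$work"
git init -q --bare remote.git                  # stand-in for the shared repo
git clone -q remote.git clone && cd clone
git config user.email dev@example.com && git config user.name dev
git checkout -qb trunk_staging
echo a > f && git add f && git commit -qm "patch"
git push -qu origin trunk_staging
git commit -q --amend -m "patch (fixed)"       # history edited in place
# A plain 'git push' would now be rejected as non-fast-forward:
git push -q --force-with-lease origin trunk_staging
```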

On Thu, May 7, 2015 at 9:28 AM, Benedict Elliott Smith
 wrote:


Re: Staging Branches

2015-05-07 Thread Josh McKenzie
>
> I still think this is overly complicated. We still
> need to run CI before we release. So what does this buy us?


I second this line of questioning. This sounds like we have a solution
looking for a problem; we're not talking about people git cloning our repo
and running it in production.

On Thu, May 7, 2015 at 8:48 AM, Jake Luciani  wrote:

> git rebase -i trunk_staging 
> fix the problem
> git rebase --continue
>
> In this situation, if there was an untested follow on commit wouldn't
> you need to force push?
>
> On Thu, May 7, 2015 at 9:28 AM, Benedict Elliott Smith
>  wrote:
> >>
> >> If we do it, we'll end up in weird situations which will be annoying for
> >> everyone
> >
> >
> > Such as? I'm not disputing, but if we're to assess the relative
> > strengths/weaknesses, we need to have specifics to discuss.
> >
> > If we do go with this suggestion, we will most likely want to enable a
> > shared git rerere cache, so that rebasing is not painful when there are
> > future commits.
> >
> > If instead we go with "repairing" commits, we cannot have a "queue" of
> > things to merge up to. Say you have a string of commits waiting for
> > approval, C1 to C4; you made C1, and it broke something. You introduce C5
> > to fix it, but the tests are still broken. Did you not really fix it? Or
> > perhaps one of C2 to C4 is to blame, but which? And have you accidentally
> > broken *them* with your commit? Who knows. Either way, we definitely
> > cannot fast forward. At the very best we can hope that the new merge did
> > not conflict or mess up the other people's C2 to C4 commits, and they
> > have to now merge on top. But what if another merge comes in, C6, in the
> > meantime, and C2 really did also break the tests in some way; how do we
> > determine C2 was to blame, and not C6, or C3 or C4? What do the
> > committers for each of these do? We end up in a lengthy tussle, and
> > aren't able to commit any of these to the mainline until all of them are
> > resolved. Really we have to prevent any merges to the staging repository
> > until the mistakes are fixed. Since our races in this scenario last the
> > length of time it takes cassci to vet them, these problems are much more
> > likely than the current race to commit.
> >
> > In the scheme I propose, in this scenario, the person who broke the
> > build rebases everyone's branches onto his now-fixed commit, and the
> > next broken commit gets blamed, and all other commits being merged in
> > on top can go in smoothly. The only pain point I can think of is the
> > multi-branch rebase, but this is solved by git rerere.
> >
> > I agree running tests is painful, but at least for the build, this should
> >> be the responsibility of the committer to build before merging
> >
> >
> > Why make the distinction if we're going to have staging commits? It's a
> > bit of a waste of time to run three "ant real-clean && ant" tasks, and it
> > increases the race window for merging (which is painful whether or not it
> > involves a rebase), and it is not a *typical* occurrence ("alarming" is
> > subjective).
> >
> > On Thu, May 7, 2015 at 2:12 PM, Sylvain Lebresne 
> > wrote:
> >
> >> > If one of them breaks, we go and edit the _staging branch in place to
> >> correct
> >> > the problem, and let CI run again.
> >>
> >> I would strongly advise against *in place* edits. If we do it, we'll
> >> end up in weird situations which will be annoying for everyone. Editing
> >> commits that have been shared is almost always a bad idea, and that's
> >> especially true for branches that will have some amount of concurrency,
> >> like those staging branches.
> >>
> >> Even if such problems are rare, better to avoid them in the first place
> >> by simply committing new "fixup" commits as we currently do. Granted,
> >> this gives you a slightly less clean history but, to the best of my
> >> knowledge, this hasn't been a pain point so far.
> >>
> >> > wait for CI; all clear
> >> >
> >> > git checkout cassandra-2.0; git merge cassandra-2.0_staging
> >> > git checkout cassandra-2.1; git merge cassandra-2.1_staging
> >> > git checkout trunk; git merge trunk_staging
> >> >
> >> > This does introduce some extra steps to the merge process
> >>
> >> If we do this, we should really automate that last part (have the CI
> >> environment merge the staging branch to the non-staging ones on
> success).
> >>
> >> > It seems if we want an "always releasable" set of branches, we need
> >> something
> >> > along these lines.
> >>
> >> Agreed as far as having staging branches vetoed by CI goes. Less sure
> about
> >> the edit-commit-in-place part as said above.
> >>
> >> > I certainly break tests by mistake, or the build itself, with alarming
> >> regularity.
> >>
> >> I agree running tests is painful, but at least for the build, it should
> >> be the committer's responsibility to build before merging. We all forget
> >> it from time to time and that's ok, but it's not ok if it's "alarmingly
> >> regular".

Re: Staging Branches

2015-05-07 Thread Ariel Weisberg
Hi,

I don't think this is necessary. If you merge with trunk, test, and someone
gets in ahead of you, just merge up and push to trunk anyway. Most of the
time the changes the other person made will be unrelated and they will
compose fine. If you actually conflict then yeah, you test again, but this
doesn't happen often.

The goal isn't to have trunk passing every single time; it's to have it pass
almost all the time, so the test history means something and when it fails,
it fails because it's broken by the latest merge.

At this size I don't see the need for a staging branch to prevent trunk
from ever breaking. There is a size where it would be helpful; I just don't
think we are there yet.

Ariel

On Thu, May 7, 2015 at 5:05 AM, Benedict Elliott Smith <
belliottsm...@datastax.com> wrote:

> A good practice as a committer applying a patch is to build and run the
> unit tests before updating the main repository, but to do this for every
> branch is infeasible and impacts local productivity. Alternatively,
> uploading the result to your development tree and waiting a few hours for
> CI to validate it is likely to result in a painful cycle of race-to-merge
> conflicts, rebasing and waiting again for the tests to run.
>
> So I would like to propose a new strategy: staging branches.
>
> Every major branch would have a parallel branch:
>
> cassandra-2.0 <- cassandra-2.0_staging
> cassandra-2.1 <- cassandra-2.1_staging
> trunk <- trunk_staging
>
> On commit, the idea would be to perform the normal merge process on the
> _staging branches only. CI would then run on every single git ref, and as
> these passed we would fast forward the main branch to the latest validated
> staging git ref. If one of them breaks, we go and edit the _staging branch
> in place to correct the problem, and let CI run again.
>
> So, a commit would look something like:
>
> patch -> cassandra-2.0_staging -> cassandra-2.1_staging -> trunk_staging
>
> wait for CI, see 2.0, 2.1 are fine but trunk is failing, so
>
> git rebase -i trunk_staging 
> fix the problem
> git rebase --continue
>
> wait for CI; all clear
>
> git checkout cassandra-2.0; git merge cassandra-2.0_staging
> git checkout cassandra-2.1; git merge cassandra-2.1_staging
> git checkout trunk; git merge trunk_staging
>
> This does introduce some extra steps to the merge process, and we will have
> branches we edit the history of, but the amount of edited history will be
> limited, and this will remain isolated from the main branches. I'm not sure
> how averse to this people are. An alternative policy might be to enforce
> that we merge locally and push to our development branches then await CI
> approval before merging. We might only require this to be repeated if there
> was a new merge conflict on final commit that could not automatically be
> resolved (although auto-merge can break stuff too).
>
> Thoughts? It seems if we want an "always releasable" set of branches, we
> need something along these lines. I certainly break tests by mistake, or
> the build itself, with alarming regularity. Fixing with merges leaves a
> confusing git history, and leaves the build broken for everyone else in the
> meantime, so patches applied after, and development branches based on top,
> aren't sure if they broke anything themselves.
>


Re: Staging Branches

2015-05-07 Thread Aleksey Yeschenko
Strictly speaking, the train schedule does demand that trunk, and all other 
branches, must be releasable at all times, whether you like it or not (for the 
record - I *don’t* like it, but here we are).

This, and other annoying things, is what we subscribed to with the tick-tock 
vs. supported branches experiment.

> We still need to run CI before we release. So what does this buy us?

Ideally (eventually?) we won’t have to run CI, including duration tests, before 
we release, because we’ll never merge anything that hasn’t passed the full 
suite, including duration tests.

That said, perhaps it’s too much change at once. We still have missing pieces 
of infrastructure, and TE is busy with what’s already back-logged. So let’s 
revisit this proposal in a few months, closer to 3.1 or 3.2, maybe?

-- 
AY

On May 7, 2015 at 16:56:07, Ariel Weisberg (ariel.weisb...@datastax.com) wrote:

Hi,  

I don't think this is necessary. If you merge with trunk, test, and someone  
gets in ahead of you, just merge up and push to trunk anyway. Most of the  
time the changes the other person made will be unrelated and they will  
compose fine. If you actually conflict then yeah you test again but this  
doesn't happen often.  

The goal isn't to have trunk passing every single time it's to have it pass  
almost all the time so the test history means something and when it fails  
it fails because it's broken by the latest merge.  

At this size I don't see the need for a staging branch to prevent trunk  
from ever breaking. There is a size where it would be helpful I just don't  
think we are there yet.  

Ariel  

On Thu, May 7, 2015 at 5:05 AM, Benedict Elliott Smith <  
belliottsm...@datastax.com> wrote:  

> A good practice as a committer applying a patch is to build and run the  
> unit tests before updating the main repository, but to do this for every  
> branch is infeasible and impacts local productivity. Alternatively,  
> uploading the result to your development tree and waiting a few hours for  
> CI to validate it is likely to result in a painful cycle of race-to-merge  
> conflicts, rebasing and waiting again for the tests to run.  
>  
> So I would like to propose a new strategy: staging branches.  
>  
> Every major branch would have a parallel branch:  
>  
> cassandra-2.0 <- cassandra-2.0_staging  
> cassandra-2.1 <- cassandra-2.1_staging  
> trunk <- trunk_staging  
>  
> On commit, the idea would be to perform the normal merge process on the  
> _staging branches only. CI would then run on every single git ref, and as  
> these passed we would fast forward the main branch to the latest validated  
> staging git ref. If one of them breaks, we go and edit the _staging branch  
> in place to correct the problem, and let CI run again.  
>  
> So, a commit would look something like:  
>  
> patch -> cassandra-2.0_staging -> cassandra-2.1_staging -> trunk_staging  
>  
> wait for CI, see 2.0, 2.1 are fine but trunk is failing, so  
>  
> git rebase -i trunk_staging   
> fix the problem  
> git rebase --continue  
>  
> wait for CI; all clear  
>  
> git checkout cassandra-2.0; git merge cassandra-2.0_staging  
> git checkout cassandra-2.1; git merge cassandra-2.1_staging  
> git checkout trunk; git merge trunk_staging  
>  
> This does introduce some extra steps to the merge process, and we will have  
> branches we edit the history of, but the amount of edited history will be  
> limited, and this will remain isolated from the main branches. I'm not sure  
> how averse to this people are. An alternative policy might be to enforce  
> that we merge locally and push to our development branches then await CI  
> approval before merging. We might only require this to be repeated if there  
> was a new merge conflict on final commit that could not automatically be  
> resolved (although auto-merge can break stuff too).  
>  
> Thoughts? It seems if we want an "always releasable" set of branches, we  
> need something along these lines. I certainly break tests by mistake, or  
> the build itself, with alarming regularity. Fixing with merges leaves a  
> confusing git history, and leaves the build broken for everyone else in the  
> meantime, so patches applied after, and development branches based on top,  
> aren't sure if they broke anything themselves.  
>  


Re: Staging Branches

2015-05-07 Thread Ryan McGuire
I'm not sure how I feel about this: on the one hand, a cleaner trunk is good;
on the other, added complexity leaves more room for error. I'm +0 currently.

> We still have missing pieces of infrastructure, and TE is busy with
> what’s already back-logged. So let’s revisit this proposal in a few months,
> closer to 3.1 or 3.2, maybe?

Apart from the auto-committing CI bot suggestion, this proposal is just a
handful of new branches to test. We can add that to CassCI today, no
problem.

On Thu, May 7, 2015 at 10:13 AM, Aleksey Yeschenko 
wrote:

> Strictly speaking, the train schedule does demand that trunk, and all
> other branches, must be releasable at all times, whether you like it or not
> (for the record - I *don’t* like it, but here we are).
>
> This, and other annoying things, is what we subscribed to with the
> tick-tock vs. supported branches experiment.
>
> > We still need to run CI before we release. So what does this buy us?
>
> Ideally (eventually?) we won’t have to run CI, including duration tests,
> before we release, because we’ll never merge anything that hadn’t passed
> the full suite, including duration tests.
>
> That said, perhaps it’s too much change at once. We still have missing
> pieces of infrastructure, and TE is busy with what’s already back-logged.
> So let’s revisit this proposal in a few months, closer to 3.1 or 3.2, maybe?
>
> --
> AY
>
> On May 7, 2015 at 16:56:07, Ariel Weisberg (ariel.weisb...@datastax.com)
> wrote:
>
> Hi,
>
> I don't think this is necessary. If you merge with trunk, test, and someone
> gets in ahead of you, just merge up and push to trunk anyway. Most of the
> time the changes the other person made will be unrelated and they will
> compose fine. If you actually conflict then yeah you test again but this
> doesn't happen often.
>
> The goal isn't to have trunk passing every single time it's to have it pass
> almost all the time so the test history means something and when it fails
> it fails because it's broken by the latest merge.
>
> At this size I don't see the need for a staging branch to prevent trunk
> from ever breaking. There is a size where it would be helpful I just don't
> think we are there yet.
>
> Ariel
>
> On Thu, May 7, 2015 at 5:05 AM, Benedict Elliott Smith <
> belliottsm...@datastax.com> wrote:
>
> > A good practice as a committer applying a patch is to build and run the
> > unit tests before updating the main repository, but to do this for every
> > branch is infeasible and impacts local productivity. Alternatively,
> > uploading the result to your development tree and waiting a few hours for
> > CI to validate it is likely to result in a painful cycle of race-to-merge
> > conflicts, rebasing and waiting again for the tests to run.
> >
> > So I would like to propose a new strategy: staging branches.
> >
> > Every major branch would have a parallel branch:
> >
> > cassandra-2.0 <- cassandra-2.0_staging
> > cassandra-2.1 <- cassandra-2.1_staging
> > trunk <- trunk_staging
> >
> > On commit, the idea would be to perform the normal merge process on the
> > _staging branches only. CI would then run on every single git ref, and as
> > these passed we would fast forward the main branch to the latest
> validated
> > staging git ref. If one of them breaks, we go and edit the _staging
> branch
> > in place to correct the problem, and let CI run again.
> >
> > So, a commit would look something like:
> >
> > patch -> cassandra-2.0_staging -> cassandra-2.1_staging -> trunk_staging
> >
> > wait for CI, see 2.0, 2.1 are fine but trunk is failing, so
> >
> > git rebase -i trunk_staging 
> > fix the problem
> > git rebase --continue
> >
> > wait for CI; all clear
> >
> > git checkout cassandra-2.0; git merge cassandra-2.0_staging
> > git checkout cassandra-2.1; git merge cassandra-2.1_staging
> > git checkout trunk; git merge trunk_staging
> >
> > This does introduce some extra steps to the merge process, and we will
> have
> > branches we edit the history of, but the amount of edited history will be
> > limited, and this will remain isolated from the main branches. I'm not
> sure
> > how averse to this people are. An alternative policy might be to enforce
> > that we merge locally and push to our development branches then await CI
> > approval before merging. We might only require this to be repeated if
> there
> > was a new merge conflict on final commit that could not automatically be
> > resolved (although auto-merge can break stuff too).
> >
> > Thoughts? It seems if we want an "always releasable" set of branches, we
> > need something along these lines. I certainly break tests by mistake, or
> > the build itself, with alarming regularity. Fixing with merges leaves a
> > confusing git history, and leaves the build broken for everyone else in
> the
> > meantime, so patches applied after, and development branches based on
> top,
> > aren't sure if they broke anything themselves.
> >
>


Re: Staging Branches

2015-05-07 Thread Ariel Weisberg
Hi,

Whoah. Our process is our own. We don't have to subscribe to any cargo-cult,
book-buying, seminar-giving process.

And whatever we do we can iterate and change until it works for us and
solves the problems we want solved.

Ariel

On Thu, May 7, 2015 at 10:13 AM, Aleksey Yeschenko 
wrote:

> Strictly speaking, the train schedule does demand that trunk, and all
> other branches, must be releasable at all times, whether you like it or not
> (for the record - I *don’t* like it, but here we are).
>
> This, and other annoying things, is what we subscribed to with the
> tick-tock vs. supported branches experiment.
>
> > We still need to run CI before we release. So what does this buy us?
>
> Ideally (eventually?) we won’t have to run CI, including duration tests,
> before we release, because we’ll never merge anything that hadn’t passed
> the full suite, including duration tests.
>
> That said, perhaps it’s too much change at once. We still have missing
> pieces of infrastructure, and TE is busy with what’s already back-logged.
> So let’s revisit this proposal in a few months, closer to 3.1 or 3.2, maybe?
>
> --
> AY
>
> On May 7, 2015 at 16:56:07, Ariel Weisberg (ariel.weisb...@datastax.com)
> wrote:
>
> Hi,
>
> I don't think this is necessary. If you merge with trunk, test, and someone
> gets in ahead of you, just merge up and push to trunk anyway. Most of the
> time the changes the other person made will be unrelated and they will
> compose fine. If you actually conflict then yeah you test again but this
> doesn't happen often.
>
> The goal isn't to have trunk passing every single time it's to have it pass
> almost all the time so the test history means something and when it fails
> it fails because it's broken by the latest merge.
>
> At this size I don't see the need for a staging branch to prevent trunk
> from ever breaking. There is a size where it would be helpful I just don't
> think we are there yet.
>
> Ariel
>
> On Thu, May 7, 2015 at 5:05 AM, Benedict Elliott Smith <
> belliottsm...@datastax.com> wrote:
>
> > A good practice as a committer applying a patch is to build and run the
> > unit tests before updating the main repository, but to do this for every
> > branch is infeasible and impacts local productivity. Alternatively,
> > uploading the result to your development tree and waiting a few hours for
> > CI to validate it is likely to result in a painful cycle of race-to-merge
> > conflicts, rebasing and waiting again for the tests to run.
> >
> > So I would like to propose a new strategy: staging branches.
> >
> > Every major branch would have a parallel branch:
> >
> > cassandra-2.0 <- cassandra-2.0_staging
> > cassandra-2.1 <- cassandra-2.1_staging
> > trunk <- trunk_staging
> >
> > On commit, the idea would be to perform the normal merge process on the
> > _staging branches only. CI would then run on every single git ref, and as
> > these passed we would fast forward the main branch to the latest
> validated
> > staging git ref. If one of them breaks, we go and edit the _staging
> branch
> > in place to correct the problem, and let CI run again.
> >
> > So, a commit would look something like:
> >
> > patch -> cassandra-2.0_staging -> cassandra-2.1_staging -> trunk_staging
> >
> > wait for CI, see 2.0, 2.1 are fine but trunk is failing, so
> >
> > git rebase -i trunk_staging 
> > fix the problem
> > git rebase --continue
> >
> > wait for CI; all clear
> >
> > git checkout cassandra-2.0; git merge cassandra-2.0_staging
> > git checkout cassandra-2.1; git merge cassandra-2.1_staging
> > git checkout trunk; git merge trunk_staging
> >
> > This does introduce some extra steps to the merge process, and we will
> have
> > branches we edit the history of, but the amount of edited history will be
> > limited, and this will remain isolated from the main branches. I'm not
> sure
> > how averse to this people are. An alternative policy might be to enforce
> > that we merge locally and push to our development branches then await CI
> > approval before merging. We might only require this to be repeated if
> there
> > was a new merge conflict on final commit that could not automatically be
> > resolved (although auto-merge can break stuff too).
> >
> > Thoughts? It seems if we want an "always releasable" set of branches, we
> > need something along these lines. I certainly break tests by mistake, or
> > the build itself, with alarming regularity. Fixing with merges leaves a
> > confusing git history, and leaves the build broken for everyone else in
> the
> > meantime, so patches applied after, and development branches based on
> top,
> > aren't sure if they broke anything themselves.
> >
>


Re: Staging Branches

2015-05-07 Thread Jeremiah D Jordan
"Our process is our own" <- always remember this.

> On May 7, 2015, at 9:25 AM, Ariel Weisberg  
> wrote:
> 
> Hi,
> 
> Whoah. Our process is our own. We don't have to subscribe to any cargo cult
> book buying seminar giving process.
> 
> And whatever we do we can iterate and change until it works for us and
> solves the problems we want solved.
> 
> Ariel
> 
> On Thu, May 7, 2015 at 10:13 AM, Aleksey Yeschenko 
> wrote:
> 
>> Strictly speaking, the train schedule does demand that trunk, and all
>> other branches, must be releasable at all times, whether you like it or not
>> (for the record - I *don’t* like it, but here we are).
>> 
>> This, and other annoying things, is what we subscribed to with the
>> tick-tock vs. supported branches experiment.
>> 
>>> We still need to run CI before we release. So what does this buy us?
>> 
>> Ideally (eventually?) we won’t have to run CI, including duration tests,
>> before we release, because we’ll never merge anything that hadn’t passed
>> the full suite, including duration tests.
>> 
>> That said, perhaps it’s too much change at once. We still have missing
>> pieces of infrastructure, and TE is busy with what’s already back-logged.
>> So let’s revisit this proposal in a few months, closer to 3.1 or 3.2, maybe?
>> 
>> --
>> AY
>> 
>> On May 7, 2015 at 16:56:07, Ariel Weisberg (ariel.weisb...@datastax.com)
>> wrote:
>> 
>> Hi,
>> 
>> I don't think this is necessary. If you merge with trunk, test, and someone
>> gets in ahead of you, just merge up and push to trunk anyway. Most of the
>> time the changes the other person made will be unrelated and they will
>> compose fine. If you actually conflict then yeah you test again but this
>> doesn't happen often.
>> 
>> The goal isn't to have trunk passing every single time it's to have it pass
>> almost all the time so the test history means something and when it fails
>> it fails because it's broken by the latest merge.
>> 
>> At this size I don't see the need for a staging branch to prevent trunk
>> from ever breaking. There is a size where it would be helpful I just don't
>> think we are there yet.
>> 
>> Ariel
>> 
>> On Thu, May 7, 2015 at 5:05 AM, Benedict Elliott Smith <
>> belliottsm...@datastax.com> wrote:
>> 
>>> A good practice as a committer applying a patch is to build and run the
>>> unit tests before updating the main repository, but to do this for every
>>> branch is infeasible and impacts local productivity. Alternatively,
>>> uploading the result to your development tree and waiting a few hours for
>>> CI to validate it is likely to result in a painful cycle of race-to-merge
>>> conflicts, rebasing and waiting again for the tests to run.
>>> 
>>> So I would like to propose a new strategy: staging branches.
>>> 
>>> Every major branch would have a parallel branch:
>>> 
>>> cassandra-2.0 <- cassandra-2.0_staging
>>> cassandra-2.1 <- cassandra-2.1_staging
>>> trunk <- trunk_staging
>>> 
>>> On commit, the idea would be to perform the normal merge process on the
>>> _staging branches only. CI would then run on every single git ref, and as
>>> these passed we would fast forward the main branch to the latest
>> validated
>>> staging git ref. If one of them breaks, we go and edit the _staging
>> branch
>>> in place to correct the problem, and let CI run again.
>>> 
>>> So, a commit would look something like:
>>> 
>>> patch -> cassandra-2.0_staging -> cassandra-2.1_staging -> trunk_staging
>>> 
>>> wait for CI, see 2.0, 2.1 are fine but trunk is failing, so
>>> 
>>> git rebase -i trunk_staging 
>>> fix the problem
>>> git rebase --continue
>>> 
>>> wait for CI; all clear
>>> 
>>> git checkout cassandra-2.0; git merge cassandra-2.0_staging
>>> git checkout cassandra-2.1; git merge cassandra-2.1_staging
>>> git checkout trunk; git merge trunk_staging
>>> 
>>> This does introduce some extra steps to the merge process, and we will
>> have
>>> branches we edit the history of, but the amount of edited history will be
>>> limited, and this will remain isolated from the main branches. I'm not
>> sure
>>> how averse to this people are. An alternative policy might be to enforce
>>> that we merge locally and push to our development branches then await CI
>>> approval before merging. We might only require this to be repeated if
>> there
>>> was a new merge conflict on final commit that could not automatically be
>>> resolved (although auto-merge can break stuff too).
>>> 
>>> Thoughts? It seems if we want an "always releasable" set of branches, we
>>> need something along these lines. I certainly break tests by mistake, or
>>> the build itself, with alarming regularity. Fixing with merges leaves a
>>> confusing git history, and leaves the build broken for everyone else in
>> the
>>> meantime, so patches applied after, and development branches based on
>> top,
>>> aren't sure if they broke anything themselves.
>>> 
>> 



Re: Staging Branches

2015-05-07 Thread Benedict Elliott Smith
>
> wouldn't you need to force push?


git push --force-with-lease

This works essentially like CAS: if the remote ref is no longer the one your
local repository expects, the push fails. You then fetch, repair your local
version, and try again.

> So what does this buy us?


This buys us a clean development process. We bought into "always
releasable". It's already a tall order; if we start weakening the
constraints before we even get started, I am unconvinced we will
successfully deliver. A monthly release cycle requires *strict* processes,
not *almost* strict, or strict*ish*.

Something that could also help streamline the process: if actual
commits were constructed on development branches ready for commit, with a
proper commit message and CHANGES.txt updated. Even more ideally: with git
rerere data for merging up to each of the branches. If we had that, and
each of the branches had been tested in CI, we would be much closer than we
are currently, as the risk-at-commit is minimized.
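The git rerere mechanism mentioned here records each hand-resolved conflict and replays it when the identical conflict recurs — exactly the situation when the same patch is merged up through 2.0 → 2.1 → trunk. Enabling it is one config switch; the sharing scheme sketched in the comments is only one possibility (the thread does not specify a mechanism, and the URL is invented).

```shell
#!/bin/sh
set -e
repo=$(mktemp -d); cd "$repo"; git init -q .

# Record every hand-resolved merge conflict in .git/rr-cache, and replay
# the stored resolution whenever the identical conflict reappears.
git config rerere.enabled true
# Also stage replayed resolutions automatically during merge/rebase.
git config rerere.autoupdate true

git config rerere.enabled   # prints: true

# One possible sharing scheme (hypothetical; URL invented): keep rr-cache in
# a repository every committer can pull, and symlink it into place.
#   git clone https://example.invalid/shared-rr-cache.git ~/shared-rr-cache
#   rm -rf .git/rr-cache && ln -s ~/shared-rr-cache .git/rr-cache
```

With a shared cache, the multi-branch rebase described earlier in the thread becomes largely mechanical: the second and third committers replay the first committer's conflict resolutions instead of redoing them.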

On Thu, May 7, 2015 at 2:48 PM, Jake Luciani  wrote:

> git rebase -i trunk_staging 
> fix the problem
> git rebase --continue
>
> In this situation, if there was an untested follow-on commit, wouldn't
> you need to force push?
>
> On Thu, May 7, 2015 at 9:28 AM, Benedict Elliott Smith
>  wrote:
> >>
> >> If we do it, we'll end up in weird situations which will be annoying for
> >> everyone
> >
> >
> > Such as? I'm not disputing, but if we're to assess the relative
> > strengths/weaknesses, we need to have specifics to discuss.
> >
> > If we do go with this suggestion, we will most likely want to enable a
> > shared git rerere cache, so that rebasing is not painful when there are
> > future commits.
> >
> > If instead we go with "repairing" commits, we cannot have a "queue" of
> > things to merge up to. Say you have a string of commits waiting for
> > approval, C1 to C4; you made C1, and it broke something. You introduce C5
> > to fix it, but the tests are still broken. Did you not really fix it? Or
> > perhaps one of C2 to C4 is to blame, but which? And have you accidentally
> > broken *them* with your commit? Who knows. Either way, we definitely
> > cannot fast forward. At the very best we can hope that the new merge did
> > not conflict or mess up the other people's C2 to C4 commits, and they
> > have to now merge on top. But what if another merge comes in, C6, in the
> > meantime, and C2 really did also break the tests in some way; how do we
> > determine C2 was to blame, and not C6, or C3 or C4? What do the
> > committers for each of these do? We end up in a lengthy tussle, and
> > aren't able to commit any of these to the mainline until all of them are
> > resolved. Really we have to prevent any merges to the staging repository
> > until the mistakes are fixed. Since our races in this scenario last the
> > length of time it takes cassci to vet them, these problems are much more
> > likely than the current race to commit.
> >
> > In the scheme I propose, in this scenario, the person who broke the
> > build rebases everyone's branches onto his now-fixed commit, and the
> > next broken commit gets blamed, and all other commits being merged in
> > on top can go in smoothly. The only pain point I can think of is the
> > multi-branch rebase, but this is solved by git rerere.
> >
> > I agree running tests is painful, but at least for the build, this should
> >> be the responsibility of the committer to build before merging
> >
> >
> > Why make the distinction if we're going to have staging commits? It's a
> > bit of a waste of time to run three "ant real-clean && ant" tasks, and it
> > increases the race window for merging (which is painful whether or not it
> > involves a rebase), and it is not a *typical* occurrence ("alarming" is
> > subjective).
> >
> > On Thu, May 7, 2015 at 2:12 PM, Sylvain Lebresne 
> > wrote:
> >
> >> > If one of them breaks, we go and edit the _staging branch in place to
> >> correct
> >> > the problem, and let CI run again.
> >>
> >> I would strongly advise against *in place* edits. If we do it, we'll
> >> end up in weird situations which will be annoying for everyone. Editing
> >> commits that have been shared is almost always a bad idea, and that's
> >> especially true for branches that will have some amount of concurrency,
> >> like those staging branches.
> >>
> >> Even if such problems are rare, better to avoid them in the first place
> >> by simply committing new "fixup" commits as we currently do. Granted,
> >> this gives you a slightly less clean history but, to the best of my
> >> knowledge, this hasn't been a pain point so far.
> >>
> >> > wait for CI; all clear
> >> >
> >> > git checkout cassandra-2.0; git merge cassandra-2.0_staging
> >> > git checkout cassandra-2.1; git merge cassandra-2.1_staging
> >> > git checkout trunk; git merge trunk_staging
> >> >
> >> > This does introduce some extra steps to the merge process
> >>
> >> If we do this, we should really automate that last part (have the CI
> >> environment merge the staging branch to the non-staging ones on success).

Re: Staging Branches

2015-05-07 Thread Benedict Elliott Smith
It's a bit unfair to characterize Aleksey as subscribing to a cargo cult.
*We* agreed to define the new release process as "keeping trunk always
releasable".

Your own words that catalyzed this: "If we release off trunk it is pretty
much necessary for trunk to be in a releasable state all the time"

It is possible we have been imprecise in our discussions, and people have
agreed to different things. But it does seem to me we agreed to the
position Aleksey is taking, and he is not blindly following some other
process that is not ours.

On Thu, May 7, 2015 at 3:25 PM, Ariel Weisberg 
wrote:

> Hi,
>
> Whoah. Our process is our own. We don't have to subscribe to any cargo cult
> book buying seminar giving process.
>
> And whatever we do we can iterate and change until it works for us and
> solves the problems we want solved.
>
> Ariel
>
> On Thu, May 7, 2015 at 10:13 AM, Aleksey Yeschenko 
> wrote:
>
> > Strictly speaking, the train schedule does demand that trunk, and all
> > other branches, must be releasable at all times, whether you like it or
> not
> > (for the record - I *don’t* like it, but here we are).
> >
> > > This, and other annoying things, is what we subscribed to with the
> > > tick-tock vs. supported branches experiment.
> >
> > > We still need to run CI before we release. So what does this buy us?
> >
> > Ideally (eventually?) we won’t have to run CI, including duration tests,
> > before we release, because we’ll never merge anything that hadn’t passed
> > the full suite, including duration tests.
> >
> > That said, perhaps it’s too much change at once. We still have missing
> > pieces of infrastructure, and TE is busy with what’s already back-logged.
> > So let’s revisit this proposal in a few months, closer to 3.1 or 3.2,
> maybe?
> >
> > --
> > AY
> >
> > On May 7, 2015 at 16:56:07, Ariel Weisberg (ariel.weisb...@datastax.com)
> > wrote:
> >
> > Hi,
> >
> > I don't think this is necessary. If you merge with trunk, test, and
> someone
> > gets in ahead of you, just merge up and push to trunk anyways. Most of
> the
> > time the changes the other person made will be unrelated and they will
> > compose fine. If you actually conflict then yeah you test again but this
> > doesn't happen often.
> >
> > The goal isn't to have trunk passing every single time it's to have it
> pass
> > almost all the time so the test history means something and when it fails
> > it fails because it's broken by the latest merge.
> >
> > At this size I don't see the need for a staging branch to prevent trunk
> > from ever breaking. There is a size where it would be helpful I just
> don't
> > think we are there yet.
> >
> > Ariel
> >
> > On Thu, May 7, 2015 at 5:05 AM, Benedict Elliott Smith <
> > belliottsm...@datastax.com> wrote:
> >
> > > A good practice as a committer applying a patch is to build and run the
> > > unit tests before updating the main repository, but to do this for
> every
> > > branch is infeasible and impacts local productivity. Alternatively,
> > > uploading the result to your development tree and waiting a few hours
> for
> > > CI to validate it is likely to result in a painful cycle of
> race-to-merge
> > > conflicts, rebasing and waiting again for the tests to run.
> > >
> > > So I would like to propose a new strategy: staging branches.
> > >
> > > Every major branch would have a parallel branch:
> > >
> > > cassandra-2.0 <- cassandra-2.0_staging
> > > cassandra-2.1 <- cassandra-2.1_staging
> > > trunk <- trunk_staging
> > >
> > > On commit, the idea would be to perform the normal merge process on the
> > > _staging branches only. CI would then run on every single git ref, and
> as
> > > these passed we would fast forward the main branch to the latest
> > validated
> > > staging git ref. If one of them breaks, we go and edit the _staging
> > branch
> > > in place to correct the problem, and let CI run again.
> > >
> > > So, a commit would look something like:
> > >
> > > patch -> cassandra-2.0_staging -> cassandra-2.1_staging ->
> trunk_staging
> > >
> > > wait for CI, see 2.0, 2.1 are fine but trunk is failing, so
> > >
> > > git rebase -i trunk_staging 
> > > fix the problem
> > > git rebase --continue
> > >
> > > wait for CI; all clear
> > >
> > > git checkout cassandra-2.0; git merge cassandra-2.0_staging
> > > git checkout cassandra-2.1; git merge cassandra-2.1_staging
> > > git checkout trunk; git merge trunk_staging
> > >
> > > This does introduce some extra steps to the merge process, and we will
> > have
> > > branches we edit the history of, but the amount of edited history will
> be
> > > limited, and this will remain isolated from the main branches. I'm not
> > sure
> > > how averse to this people are. An alternative policy might be to
> enforce
> > > that we merge locally and push to our development branches then await
> CI
> > > approval before merging. We might only require this to be repeated if
> > there
> > > was a new merge conflict on final commit that could not automatically
> be
> > > r

Re: Staging Branches

2015-05-07 Thread Jake Luciani
You then fetch and repair
your local version and try again.

This breaks your model of applying every commit ref by ref.

I'm all for trying to avoid extra work and improve stability, but we already
have added a layer of testing every change before commit. I'm not going to
accept that we need to also add a layer of testing before every merge.




On Thu, May 7, 2015 at 10:36 AM, Benedict Elliott Smith
 wrote:
>>
>> wouldn't you need to force push?
>
>
> git push --force-with-lease
>
> This works essentially like CAS; if the remote repositories are not the
> same as the one you have modified, it will fail. You then fetch and repair
> your local version and try again.
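[The CAS-like behavior described above can be sketched locally. Everything below is illustrative: two throwaway clones (`alice`, `bob`) of a bare repo stand in for two committers racing on a staging branch.]

```shell
#!/bin/sh
# Sketch: --force-with-lease is rejected when the remote ref moved since we
# last fetched, like a failed compare-and-swap. All names are illustrative.
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.invalid
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.invalid
work=$(mktemp -d) && cd "$work"
git init -q --bare origin.git
git clone -q origin.git alice 2>/dev/null
git clone -q origin.git bob 2>/dev/null

cd alice
git symbolic-ref HEAD refs/heads/trunk_staging
echo v1 > f && git add f && git commit -qm "c1"
git push -q origin trunk_staging

cd ../bob
git fetch -q origin && git checkout -q trunk_staging
echo v2 > f && git commit -qam "c2" && git push -q origin trunk_staging

cd ../alice
# Alice rewrites her local branch without fetching Bob's c2 first...
echo v1-fixed > f && git commit -qa --amend -m "c1 (fixed)"
# ...so her lease (the stale origin/trunk_staging) no longer matches the
# remote, and the push fails instead of silently clobbering c2:
if git push --force-with-lease origin trunk_staging >/dev/null 2>&1; then
  lease=accepted
else
  lease=rejected
fi
echo "push $lease"
```

After the rejection, Alice fetches, repairs her local branch on top of Bob's commit, and retries — exactly the "fetch and repair" loop described above.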
>
> So what does this buy us?
>
>
> This buys us a clean development process. We bought into "always
> releasable". It's already a tall order; if we start weakening the
> constraints before we even get started, I am unconvinced we will
> successfully deliver. A monthly release cycle requires *strict* processes,
> not *almost* strict, or strict*ish*.
>
> Something that could also help make a more streamlined process: if actual
> commits were constructed on development branches ready for commit, with a
> proper commit message and CHANGES.txt updated. Even more ideally: with git
> rerere data for merging up to each of the branches. If we had that, and
> each of the branches had been tested in CI, we would be much closer than we
> are currently, as the risk-at-commit is minimized.
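[For the shared `git rerere` data mentioned above, one possible setup is sketched below. The shared directory path is hypothetical; how the team syncs it (a dedicated branch, rsync, etc.) is left open.]

```shell
# Record conflict resolutions so rebasing the same commits across branches
# does not require re-resolving the same conflicts by hand.
git config rerere.enabled true
git config rerere.autoupdate true   # stage replayed resolutions automatically

# Hypothetical: share recorded resolutions between committers by pointing the
# local cache at a directory the team syncs out of band.
# rm -rf .git/rr-cache
# ln -s /path/to/shared/rr-cache .git/rr-cache
```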
>
> On Thu, May 7, 2015 at 2:48 PM, Jake Luciani  wrote:
>
>> git rebase -i trunk_staging 
>> fix the problem
>> git rebase --continue
>>
>> In this situation, if there was an untested follow on commit wouldn't
>> you need to force push?
>>
>> On Thu, May 7, 2015 at 9:28 AM, Benedict Elliott Smith
>>  wrote:
>> >>
>> >> If we do it, we'll end up in weird situations which will be annoying for
>> >> everyone
>> >
>> >
>> > Such as? I'm not disputing, but if we're to assess the relative
>> > strengths/weaknesses, we need to have specifics to discuss.
>> >
>> > If we do go with this suggestion, we will most likely want to enable a
>> > shared git rerere cache, so that rebasing is not painful when there are
>> > future commits.
>> >
>> > If instead we go with "repairing" commits, we cannot have a "queue" of
>> > things to merge up to. Say you have a string of commits waiting for
>> > approval C1 to C4; you made C1, and it broke something. You introduce C5
>> to
>> > fix it, but the tests are still broken. Did you not really fix it? Or
>> > perhaps one of C2 to C4 are to blame, but which? And have you
>> accidentally
>> > broken *them* with your commit? Who knows. Either way, we definitely
>> cannot
>> > fast forward. At the very best we can hope that the new merge did not
>> > conflict or mess up the other people's C2 to C4 commits, and they have to
>> > now merge on top. But what if another merge comes in, C6, in the
>> meantime;
>> > and C2 really did also break the tests in some way; how do we determine
>> C2
>> > was to blame, and not C6, or C3 or C4? What do the committers for each of
>> > these do? We end up in a lengthy tussle, and aren't able to commit any of
>> > these to the mainline until all of them are resolved. Really we have to
>> > prevent any merges to the staging repository until the mistakes are
>> fixed.
>> > Since our races in this scenario are the length of time taken for cassci
>> > to vet them, these problems are much more likely than current race to
>> > commit.
>> >
>> > In the scheme I propose, in this scenario, the person who broke the build
>> > rebases everyone's branches to his now fixed commit, and the next broken
>> > commit gets blamed, and all other commits being merged in on top can go
>> in
>> > smoothly. The only pain point I can think of is the multi-branch rebase,
>> > but this is solved by git rerere.
>> >
>> > I agree running tests is painful, but at least for the build, this should
>> >> be the responsibility of the committer to build before merging
>> >
>> >
>> > Why make the distinction if we're going to have staging commits? It's a
>> bit
>> > of a waste of time to run three ant real-clean && ant tasks, and
>> increases
>> > the race window for merging (which is painful whether or not it involves a
>> > rebase), and it is not a *typical* occurrence ("alarming" is subjective)
>> >
>> > On Thu, May 7, 2015 at 2:12 PM, Sylvain Lebresne 
>> > wrote:
>> >
>> >> > If one of them breaks, we go and edit the _staging branch in place to
>> >> correct
>> >> > the problem, and let CI run again.
>> >>
>> >> I would strongly advise against *in place* edits. If we do it, we'll
>> end up
>> >> in
>> >> weird situations which will be annoying for everyone. Editing commits
>> that
>> >> have
>> >> been shared is almost always a bad idea and that's especially true for
> >> >> branches
>> >> that will have some amount of concurrency like those staging branches.
>> >>
>> >> Even if such problems are rare, better to avoid them in the first place
>> by
>> >> simply
>> >> commi

Re: Staging Branches

2015-05-07 Thread Benedict Elliott Smith
>
> This breaks your model of applying every commit ref by ref.


How? The rebase only affects commits after the "real" branch, so it still
cleanly fast forwards?

Merging is *hard*. Especially 2.1 -> 3.0, with many breaking API changes
(this is before 8099, which is going to make a *world* of hurt, and will
stick around for a year). It is *very* easy to break things, with even the
utmost care.

On Thu, May 7, 2015 at 3:46 PM, Jake Luciani  wrote:

> You then fetch and repair
> your local version and try again.
>
> This breaks your model of applying every commit ref by ref.
>
> I'm all for trying to avoid extra work/stability but we already have
> added a layer of testing every change before commit.  I'm not going to
> accept we need to also add a layer of testing before every merge.
>
>
>
>
> On Thu, May 7, 2015 at 10:36 AM, Benedict Elliott Smith
>  wrote:
> >>
> >> wouldn't you need to force push?
> >
> >
> > git push --force-with-lease
> >
> > This works essentially like CAS; if the remote repositories are not the
> > same as the one you have modified, it will fail. You then fetch and
> repair
> > your local version and try again.
> >
> > So what does this buy us?
> >
> >
> > This buys us a clean development process. We bought into "always
> > releasable". It's already a tall order; if we start weakening the
> > constraints before we even get started, I am unconvinced we will
> > successfully deliver. A monthly release cycle requires *strict*
> processes,
> > not *almost* strict, or strict*ish*.
> >
> > Something that could also help make a more streamlined process: if actual
> > commits were constructed on development branches ready for commit, with a
> > proper commit message and CHANGES.txt updated. Even more ideally: with
> git
> > rerere data for merging up to each of the branches. If we had that, and
> > each of the branches had been tested in CI, we would be much closer than
> we
> > are currently, as the risk-at-commit is minimized.
> >
> > On Thu, May 7, 2015 at 2:48 PM, Jake Luciani  wrote:
> >
> >> git rebase -i trunk_staging 
> >> fix the problem
> >> git rebase --continue
> >>
> >> In this situation, if there was an untested follow on commit wouldn't
> >> you need to force push?
> >>
> >> On Thu, May 7, 2015 at 9:28 AM, Benedict Elliott Smith
> >>  wrote:
> >> >>
> >> >> If we do it, we'll end up in weird situations which will be annoying
> for
> >> >> everyone
> >> >
> >> >
> >> > Such as? I'm not disputing, but if we're to assess the relative
> >> > strengths/weaknesses, we need to have specifics to discuss.
> >> >
> >> > If we do go with this suggestion, we will most likely want to enable a
> >> > shared git rerere cache, so that rebasing is not painful when there
> are
> >> > future commits.
> >> >
> >> > If instead we go with "repairing" commits, we cannot have a "queue" of
> >> > things to merge up to. Say you have a string of commits waiting for
> >> > approval C1 to C4; you made C1, and it broke something. You introduce
> C5
> >> to
> >> > fix it, but the tests are still broken. Did you not really fix it? Or
> >> > perhaps one of C2 to C4 are to blame, but which? And have you
> >> accidentally
> >> > broken *them* with your commit? Who knows. Either way, we definitely
> >> cannot
> >> > fast forward. At the very best we can hope that the new merge did not
> >> > conflict or mess up the other people's C2 to C4 commits, and they
> have to
> >> > now merge on top. But what if another merge comes in, C6, in the
> >> meantime;
> >> > and C2 really did also break the tests in some way; how do we
> determine
> >> C2
> >> > was to blame, and not C6, or C3 or C4? What do the committers for
> each of
> >> > these do? We end up in a lengthy tussle, and aren't able to commit
> any of
> >> > these to the mainline until all of them are resolved. Really we have
> to
> >> > prevent any merges to the staging repository until the mistakes are
> >> fixed.
> >> > Since our races in these scenario are the length of time taken for
> cassci
> >> > to vet them, these problems are much more likely than current race to
> >> > commit.
> >> >
> >> > In the scheme I propose, in this scenario, the person who broke the
> build
> >> > rebases everyone's branches to his now fixed commit, and the next
> broken
> >> > commit gets blamed, and all other commits being merged in on top can
> go
> >> in
> >> > smoothly. The only pain point I can think of is the multi-branch
> rebase,
> >> > but this is solved by git rerere.
> >> >
> >> > I agree running tests is painful, but at least for the build, this
> should
> >> >> be the responsibility of the committer to build before merging
> >> >
> >> >
> >> > Why make the distinction if we're going to have staging commits? It's
> a
> >> bit
> >> > of a waste of time to run three ant real-clean && ant tasks, and
> >> increases
> >> > the race window for merging (which is painful whether or not it
> >> > involves a
> >> > rebase), and it is not a *typical* occurrence ("alarming" is
> sub

Re: Staging Branches

2015-05-07 Thread Ariel Weisberg
Hi,

Sorry, didn't mean to blame or come off snarky. I just think it is important not
to #include our release process from somewhere else. We don't have to do
anything unless it is necessary to meet some requirement of what we are
trying to do.

So the phrase "Trunk is always releasable" definitely has some wiggle room
because you have to define what your release process is.

If your requirement is that at any time you be able to tag trunk and ship
it within minutes then yes staging branches help solve that problem.

The reality is that the release process always takes low single digit days
because you branch trunk, then wait for longer running automated tests to
run against that branch. If there happens to be a failure you may have to
update the branch, but you have bounded how much brokenness sits between you
and release already. We also don't have a requirement to be able to ship
nigh immediately.

We can balance the cost of extra steps and process against the cost of
having to delay some releases some of the time by a few days and pick
whichever is more important. We are still reducing the amount of time it
takes to get a working release. Reduced enough that we should be able to
ship every month without difficulty. I have been on a team roughly our size
that shipped every three weeks without having staging branches. Trunk broke
infrequently enough it wasn't an issue and when it did break it wasn't hard
to address. The real pain point was flapping tests and the diffusion of
responsibility that prevented them from getting fixed.

If I were trying to sell staging branches I would work the angle that I
want to be able to bisect trunk without coming across broken revisions.
Then balance the value of that with the cost of the process.
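[That bisect angle can be sketched with a throwaway repo. The `tests` file below stands in for a real test suite (a real project would have `git bisect run` invoke `ant test` or similar); commit contents are illustrative.]

```shell
#!/bin/sh
# Sketch: `git bisect run` pinpointing the first broken revision on trunk.
# If every ref had passed CI before merging, bisect never lands on a
# revision that is broken for unrelated reasons.
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.invalid
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.invalid
work=$(mktemp -d) && cd "$work" && git init -q
git symbolic-ref HEAD refs/heads/trunk
for i in 1 2 3 4 5; do
  # Commits 4 and 5 are "broken"; the tests file stands in for the suite.
  if [ "$i" -ge 4 ]; then echo failing > tests; else echo passing > tests; fi
  echo "change $i" > src && git add . && git commit -qm "commit $i"
  [ "$i" -eq 4 ] && first_bad=$(git rev-parse HEAD) || true
done

git bisect start HEAD HEAD~4 >/dev/null 2>&1   # tip is bad, first commit good
# Exit 0 marks a revision good, non-zero marks it bad:
git bisect run grep -q passing tests >/dev/null 2>&1 || true
echo "first bad commit: $(git rev-parse --short refs/bisect/bad)"
```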

Ariel

On Thu, May 7, 2015 at 10:41 AM, Benedict Elliott Smith <
belliottsm...@datastax.com> wrote:

> It's a bit unfair to characterize Aleksey as subscribing to a cargo cult.
> *We* agreed to define the new release process as "keeping trunk always
> releasable".
>
> Your own words that catalyzed this: "If we release off trunk it is pretty
> much necessary for trunk to be in a releasable state all the time"
>
> It is possible we have been imprecise in our discussions, and people have
> agreed to different things. But it does seem to me we agreed to the
> position Aleksey is taking, and he is not blindly following some other
> process that is not ours.
>
> On Thu, May 7, 2015 at 3:25 PM, Ariel Weisberg <
> ariel.weisb...@datastax.com>
> wrote:
>
> > Hi,
> >
> > Whoah. Our process is our own. We don't have to subscribe to any cargo
> cult
> > book buying seminar giving process.
> >
> > And whatever we do we can iterate and change until it works for us and
> > solves the problems we want solved.
> >
> > Ariel
> >
> > On Thu, May 7, 2015 at 10:13 AM, Aleksey Yeschenko 
> > wrote:
> >
> > > Strictly speaking, the train schedule does demand that trunk, and all
> > > other branches, must be releasable at all times, whether you like it or
> > not
> > > (for the record - I *don’t* like it, but here we are).
> > >
> > > This, and other annoying things, is what we subscribed to with the
> > > tick-tock vs. supported branches experiment.
> > >
> > > > We still need to run CI before we release. So what does this buy us?
> > >
> > > Ideally (eventually?) we won’t have to run CI, including duration
> tests,
> > > before we release, because we’ll never merge anything that hadn’t
> passed
> > > the full suite, including duration tests.
> > >
> > > That said, perhaps it’s too much change at once. We still have missing
> > > pieces of infrastructure, and TE is busy with what’s already
> back-logged.
> > > So let’s revisit this proposal in a few months, closer to 3.1 or 3.2,
> > maybe?
> > >
> > > --
> > > AY
> > >
> > > On May 7, 2015 at 16:56:07, Ariel Weisberg (
> ariel.weisb...@datastax.com)
> > > wrote:
> > >
> > > Hi,
> > >
> > > I don't think this is necessary. If you merge with trunk, test, and
> > someone
> > > gets in ahead of you, just merge up and push to trunk anyways. Most of
> > the
> > > time the changes the other person made will be unrelated and they will
> > > compose fine. If you actually conflict then yeah you test again but
> this
> > > doesn't happen often.
> > >
> > > The goal isn't to have trunk passing every single time it's to have it
> > pass
> > > almost all the time so the test history means something and when it
> fails
> > > it fails because it's broken by the latest merge.
> > >
> > > At this size I don't see the need for a staging branch to prevent trunk
> > > from ever breaking. There is a size where it would be helpful I just
> > don't
> > > think we are there yet.
> > >
> > > Ariel
> > >
> > > On Thu, May 7, 2015 at 5:05 AM, Benedict Elliott Smith <
> > > belliottsm...@datastax.com> wrote:
> > >
> > > > A good practice as a committer applying a patch is to build and run
> the
> > > > unit tests before updating the main repository, but to do this for
> > every
> > > > branch i

Re: Staging Branches

2015-05-07 Thread Jake Luciani
Ok let's focus then on the idea that trunk is releasable.  Releasable
to me doesn't mean it can't contain a bad merge.

It means it doesn't contain some untested and unstable feature.  We
can always "release from trunk" and we still have a release process.

The idea that trunk must contain, the first time code hits the branch,
releasable code is way overboard.

On Thu, May 7, 2015 at 10:50 AM, Benedict Elliott Smith
 wrote:
>>
>> This breaks your model of applying every commit ref by ref.
>
>
> How? The rebase only affects commits after the "real" branch, so it still
> cleanly fast forwards?
>
> Merging is *hard*. Especially 2.1 -> 3.0, with many breaking API changes
> (this is before 8099, which is going to make a *world* of hurt, and will
> stick around for a year). It is *very* easy to break things, with even the
> utmost care.
>
> On Thu, May 7, 2015 at 3:46 PM, Jake Luciani  wrote:
>
>> You then fetch and repair
>> your local version and try again.
>>
>> This breaks your model of applying every commit ref by ref.
>>
>> I'm all for trying to avoid extra work/stability but we already have
>> added a layer of testing every change before commit.  I'm not going to
>> accept we need to also add a layer of testing before every merge.
>>
>>
>>
>>
>> On Thu, May 7, 2015 at 10:36 AM, Benedict Elliott Smith
>>  wrote:
>> >>
>> >> wouldn't you need to force push?
>> >
>> >
>> > git push --force-with-lease
>> >
>> > This works essentially like CAS; if the remote repositories are not the
>> > same as the one you have modified, it will fail. You then fetch and
>> repair
>> > your local version and try again.
>> >
>> > So what does this buy us?
>> >
>> >
>> > This buys us a clean development process. We bought into "always
>> > releasable". It's already a tall order; if we start weakening the
>> > constraints before we even get started, I am unconvinced we will
>> > successfully deliver. A monthly release cycle requires *strict*
>> processes,
>> > not *almost* strict, or strict*ish*.
>> >
>> > Something that could also help make a more streamlined process: if actual
>> > commits were constructed on development branches ready for commit, with a
>> > proper commit message and CHANGES.txt updated. Even more ideally: with
>> git
>> > rerere data for merging up to each of the branches. If we had that, and
>> > each of the branches had been tested in CI, we would be much closer than
>> we
>> > are currently, as the risk-at-commit is minimized.
>> >
>> > On Thu, May 7, 2015 at 2:48 PM, Jake Luciani  wrote:
>> >
>> >> git rebase -i trunk_staging 
>> >> fix the problem
>> >> git rebase --continue
>> >>
>> >> In this situation, if there was an untested follow on commit wouldn't
>> >> you need to force push?
>> >>
>> >> On Thu, May 7, 2015 at 9:28 AM, Benedict Elliott Smith
>> >>  wrote:
>> >> >>
>> >> >> If we do it, we'll end up in weird situations which will be annoying
>> for
>> >> >> everyone
>> >> >
>> >> >
>> >> > Such as? I'm not disputing, but if we're to assess the relative
>> >> > strengths/weaknesses, we need to have specifics to discuss.
>> >> >
>> >> > If we do go with this suggestion, we will most likely want to enable a
>> >> > shared git rerere cache, so that rebasing is not painful when there
>> are
>> >> > future commits.
>> >> >
>> >> > If instead we go with "repairing" commits, we cannot have a "queue" of
>> >> > things to merge up to. Say you have a string of commits waiting for
>> >> > approval C1 to C4; you made C1, and it broke something. You introduce
>> C5
>> >> to
>> >> > fix it, but the tests are still broken. Did you not really fix it? Or
>> >> > perhaps one of C2 to C4 are to blame, but which? And have you
>> >> accidentally
>> >> > broken *them* with your commit? Who knows. Either way, we definitely
>> >> cannot
>> >> > fast forward. At the very best we can hope that the new merge did not
>> >> > conflict or mess up the other people's C2 to C4 commits, and they
>> have to
>> >> > now merge on top. But what if another merge comes in, C6, in the
>> >> meantime;
>> >> > and C2 really did also break the tests in some way; how do we
>> determine
>> >> C2
>> >> > was to blame, and not C6, or C3 or C4? What do the committers for
>> each of
>> >> > these do? We end up in a lengthy tussle, and aren't able to commit
>> any of
>> >> > these to the mainline until all of them are resolved. Really we have
>> to
>> >> > prevent any merges to the staging repository until the mistakes are
>> >> fixed.
>> >> > Since our races in these scenario are the length of time taken for
>> cassci
>> >> > to vet them, these problems are much more likely than current race to
>> >> > commit.
>> >> >
>> >> > In the scheme I propose, in this scenario, the person who broke the
>> build
>> >> > rebases everyone's branches to his now fixed commit, and the next
>> broken
>> >> > commit gets blamed, and all other commits being merged in on top can
>> go
>> >> in
>> >> > smoothly. The only pain point I can think of is the multi-branch
>> re

Re: Staging Branches

2015-05-07 Thread Josh McKenzie
>
> Merging is *hard*. Especially 2.1 -> 3.0, with many breaking API changes
> (this is before 8099, which is going to make a *world* of hurt, and will
> stick around for a year). It is *very* easy to break things, with even the
> utmost care.


While I agree re:merging, I'm not convinced the proportion of commits that
will benefit from a staging branch testing pipeline is high enough to
justify the time and complexity overhead for (what I expect are) the vast
majority of commits that are smaller, incremental changes that won't
benefit from this.

On Thu, May 7, 2015 at 9:56 AM, Ariel Weisberg 
wrote:

> Hi,
>
> Sorry, didn't mean to blame or come off snarky. I just think it is important not
> to #include our release process from somewhere else. We don't have to do
> anything unless it is necessary to meet some requirement of what we are
> trying to do.
>
> So the phrase "Trunk is always releasable" definitely has some wiggle room
> because you have to define what your release process is.
>
> If your requirement is that at any time you be able to tag trunk and ship
> it within minutes then yes staging branches help solve that problem.
>
> The reality is that the release process always takes low single digit days
> because you branch trunk, then wait for longer running automated tests to
> run against that branch. If there happens to be a failure you may have to
> update the branch, but you have bounded how much brokenness sits between you
> and release already. We also don't have a requirement to be able to ship
> nigh immediately.
>
> We can balance the cost of extra steps and process against the cost of
> having to delay some releases some of the time by a few days and pick
> whichever is more important. We are still reducing the amount of time it
> takes to get a working release. Reduced enough that we should be able to
> ship every month without difficulty. I have been on a team roughly our size
> that shipped every three weeks without having staging branches. Trunk broke
> infrequently enough it wasn't an issue and when it did break it wasn't hard
> to address. The real pain point was flapping tests and the diffusion of
> responsibility that prevented them from getting fixed.
>
> If I were trying to sell staging branches I would work the angle that I
> want to be able to bisect trunk without coming across broken revisions.
> Then balance the value of that with the cost of the process.
>
> Ariel
>
> On Thu, May 7, 2015 at 10:41 AM, Benedict Elliott Smith <
> belliottsm...@datastax.com> wrote:
>
> > It's a bit unfair to characterize Aleksey as subscribing to a cargo cult.
> > *We* agreed to define the new release process as "keeping trunk always
> > releasable".
> >
> > Your own words that catalyzed this: "If we release off trunk it is pretty
> > much necessary for trunk to be in a releasable state all the time"
> >
> > It is possible we have been imprecise in our discussions, and people have
> > agreed to different things. But it does seem to me we agreed to the
> > position Aleksey is taking, and he is not blindly following some other
> > process that is not ours.
> >
> > On Thu, May 7, 2015 at 3:25 PM, Ariel Weisberg <
> > ariel.weisb...@datastax.com>
> > wrote:
> >
> > > Hi,
> > >
> > > Whoah. Our process is our own. We don't have to subscribe to any cargo
> > cult
> > > book buying seminar giving process.
> > >
> > > And whatever we do we can iterate and change until it works for us and
> > > solves the problems we want solved.
> > >
> > > Ariel
> > >
> > > On Thu, May 7, 2015 at 10:13 AM, Aleksey Yeschenko  >
> > > wrote:
> > >
> > > > Strictly speaking, the train schedule does demand that trunk, and all
> > > > other branches, must be releasable at all times, whether you like it
> or
> > > not
> > > > (for the record - I *don’t* like it, but here we are).
> > > >
> > > > This, and other annoying things, is what we subscribed to with the
> > > > tick-tock vs. supported branches experiment.
> > > >
> > > > > We still need to run CI before we release. So what does this buy
> us?
> > > >
> > > > Ideally (eventually?) we won’t have to run CI, including duration
> > tests,
> > > > before we release, because we’ll never merge anything that hadn’t
> > passed
> > > > the full suite, including duration tests.
> > > >
> > > > That said, perhaps it’s too much change at once. We still have
> missing
> > > > pieces of infrastructure, and TE is busy with what’s already
> > back-logged.
> > > > So let’s revisit this proposal in a few months, closer to 3.1 or 3.2,
> > > maybe?
> > > >
> > > > --
> > > > AY
> > > >
> > > > On May 7, 2015 at 16:56:07, Ariel Weisberg (
> > ariel.weisb...@datastax.com)
> > > > wrote:
> > > >
> > > > Hi,
> > > >
> > > > I don't think this is necessary. If you merge with trunk, test, and
> > > someone
> > > > gets in ahead of you, just merge up and push to trunk anyways. Most
> of
> > > the
> > > > time the changes the other person made will be unrelated and they
> will
> > > > compose fine. If you ac

Re: Staging Branches

2015-05-07 Thread Benedict Elliott Smith
It's odd, because I honestly think this release process will be easier,
since the stricter we make it the smoother it can become. It requires well
formed commits from everyone, and lets the committers asynchronously
confirm their work, and for it to never be in question *who* needs to fix
something, nor what the effect of their fixing it will be. It means we can,
as Ariel said, perform a bisect and honestly know its result is accurate.
Small commits don't need to worry about fast-forwarding; in fact, nobody
does. It can either be automated, or we can fast forward at a time that
suits us. In which case the process is *the same* as it is currently.

I have no interest in making the commit process harder.


On Thu, May 7, 2015 at 3:59 PM, Jake Luciani  wrote:

> Ok let's focus then on the idea that trunk is releasable.  Releasable
> to me doesn't mean it can't contain a bad merge.
>
> It means it doesn't contain some untested and unstable feature.  We
> can always "release from trunk" and we still have a release process.
>
> The idea that trunk must contain, the first time code hits the branch,
> releasable code is way overboard.
>
> On Thu, May 7, 2015 at 10:50 AM, Benedict Elliott Smith
>  wrote:
> >>
> >> This breaks your model of applying every commit ref by ref.
> >
> >
> > How? The rebase only affects commits after the "real" branch, so it still
> > cleanly fast forwards?
> >
> > Merging is *hard*. Especially 2.1 -> 3.0, with many breaking API changes
> > (this is before 8099, which is going to make a *world* of hurt, and will
> > stick around for a year). It is *very* easy to break things, with even
> the
> > utmost care.
> >
> > On Thu, May 7, 2015 at 3:46 PM, Jake Luciani  wrote:
> >
> >> You then fetch and repair
> >> your local version and try again.
> >>
> >> This breaks your model of applying every commit ref by ref.
> >>
> >> I'm all for trying to avoid extra work/stability but we already have
> >> added a layer of testing every change before commit.  I'm not going to
> >> accept we need to also add a layer of testing before every merge.
> >>
> >>
> >>
> >>
> >> On Thu, May 7, 2015 at 10:36 AM, Benedict Elliott Smith
> >>  wrote:
> >> >>
> >> >> wouldn't you need to force push?
> >> >
> >> >
> >> > git push --force-with-lease
> >> >
> >> > This works essentially like CAS; if the remote repositories are not
> >> > the same as the one you have modified, it will fail. You then fetch
> >> > and repair your local version and try again.
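A minimal sketch of that CAS-like behaviour, using throwaway local repositories (all repository and branch names here are illustrative):

```shell
# Demonstrates --force-with-lease acting like compare-and-swap: the push
# succeeds only if the remote ref still matches our last-fetched value.
# Clone "b" holds a stale view of the remote, so its forced push is rejected.
set -e
work=$(mktemp -d)
cd "$work"
git init -q --bare origin.git
git clone -q origin.git a 2>/dev/null
(cd a \
  && git config user.email a@example.invalid && git config user.name a \
  && echo one > f && git add f && git commit -qm 'c1' \
  && git push -q origin HEAD:main)
git --git-dir=origin.git symbolic-ref HEAD refs/heads/main
git clone -q origin.git b
# "a" rewrites history and wins the race to update the remote
(cd a && git commit -q --amend -m 'c1, amended' \
      && git push -q --force-with-lease origin HEAD:main)
# "b" still expects the old tip, so its lease check fails: the CAS loses
if (cd b && git push -q --force-with-lease origin HEAD:main 2>/dev/null)
then result=raced-through
else result=rejected
fi
echo "$result"
```

When the lease fails, the recovery is exactly as described above: fetch, repair your local branches, and push again.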
> >> >
> >> > So what does this buy us?
> >> >
> >> >
> >> > This buys us a clean development process. We bought into "always
> >> > releasable". It's already a tall order; if we start weakening the
> >> > constraints before we even get started, I am unconvinced we will
> >> > successfully deliver. A monthly release cycle requires *strict*
> >> processes,
> >> > not *almost* strict, or strict*ish*.
> >> >
> >> > Something that could also help make a more streamlined process: if
> >> > actual commits were constructed on development branches ready for
> >> > commit, with a proper commit message and CHANGES.txt updated. Even more
> >> > ideally: with git rerere data for merging up to each of the branches.
> >> > If we had that, and each of the branches had been tested in CI, we
> >> > would be much closer than we are currently, as the risk-at-commit is
> >> > minimized.
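For reference, turning on rerere is a one-time config switch per clone; how the recorded resolutions would be *shared* between committers is an assumption here (git only records them locally, under `.git/rr-cache`):

```shell
# Sketch: enable rerere so merge-conflict resolutions are recorded and
# automatically replayed on later rebases/merges of the same conflict.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config rerere.enabled true       # record and replay resolutions
git config rerere.autoUpdate true    # stage replayed resolutions automatically
# A shared cache could be wired up by syncing/symlinking the cache dir,
# e.g. (hypothetical path):
#   ln -s /path/to/shared/rr-cache .git/rr-cache
git config rerere.enabled            # prints: true
```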
> >> >
> >> > On Thu, May 7, 2015 at 2:48 PM, Jake Luciani 
> wrote:
> >> >
> >> >> git rebase -i trunk_staging 
> >> >> fix the problem
> >> >> git rebase --continue
> >> >>
> >> >> In this situation, if there was an untested follow on commit wouldn't
> >> >> you need to force push?
> >> >>
> >> >> On Thu, May 7, 2015 at 9:28 AM, Benedict Elliott Smith
> >> >>  wrote:
> >> >> >>
> >> >> >> If we do it, we'll end up in weird situations which will be
> >> >> >> annoying for everyone
> >> >> >
> >> >> >
> >> >> > Such as? I'm not disputing, but if we're to assess the relative
> >> >> > strengths/weaknesses, we need to have specifics to discuss.
> >> >> >
> >> >> > If we do go with this suggestion, we will most likely want to
> >> >> > enable a shared git rerere cache, so that rebasing is not painful
> >> >> > when there are future commits.
> >> >> >
> >> >> > If instead we go with "repairing" commits, we cannot have a "queue"
> >> >> > of things to merge up to. Say you have a string of commits waiting
> >> >> > for approval, C1 to C4; you made C1, and it broke something. You
> >> >> > introduce C5 to fix it, but the tests are still broken. Did you not
> >> >> > really fix it? Or perhaps one of C2 to C4 are to blame, but which?
> >> >> > And have you accidentally broken *them* with your commit? Who knows.
> >> >> > Either way, we definitely cannot fast forward. At the very best we
> >> >> > can hope that the new merge did not conflict or mess up the other
> >> >> > people's C2 to C4 commits, and they have to now merge on top. But
> >> >> > what if an

Re: Staging Branches

2015-05-07 Thread Ariel Weisberg
Hi,

If it were automated I would have no problem with it. That would be less
work for me, because the problems detected would occur anyway and would have
to be dealt with by me. I just don't want to deal with extra steps and latency
manually.

So who and when is going to implement the automation?

Ariel

On Thu, May 7, 2015 at 11:11 AM, Benedict Elliott Smith <
belliottsm...@datastax.com> wrote:

> It's odd, because I honestly think this release process will be easier,
> since the stricter we make it the smoother it can become. It requires well
> formed commits from everyone, and lets the committers asynchronously
> confirm their work, and for it to never be in question *who* needs to fix
> something, nor what the effect of their fixing it will be. It means we can,
> as Ariel said, perform a bisect and honestly know its result is accurate.
> Small commits don't need to worry about fast-forwarding; in fact, nobody
> does. It can either be automated, or we can fast forward at a time that
> suits us. In which case the process is *the same* as it is currently.
>
> I have no interest in making the commit process harder.

Re: Staging Branches

2015-05-07 Thread Aleksey Yeschenko
I would argue that we must *at least* do the following for now.

If your patch is 2.1-based, you need to create a private git branch for that 
and a merged trunk branch ( and -trunk). And you don’t push anything 
until cassci validates all of those three branches, first.

An issue without a link to cassci for both of those branches passing doesn’t 
qualify as done to me.

That alone will be enough to catch most merge-related regressions.

Going with staging branches would also prevent any issues from concurrent 
pushes, but given the opposition, I’m fine with dropping that requirement, for 
now.

-- 
AY

On May 7, 2015 at 18:04:20, Josh McKenzie (josh.mcken...@datastax.com) wrote:

>  
> Merging is *hard*. Especially 2.1 -> 3.0, with many breaking API changes  
> (this is before 8099, which is going to make a *world* of hurt, and will  
> stick around for a year). It is *very* easy to break things, with even the  
> utmost care.  


While I agree re:merging, I'm not convinced the proportion of commits that  
will benefit from a staging branch testing pipeline is high enough to  
justify the time and complexity overhead to (what I expect are) the vast  
majority of commits that are smaller, incremental changes that won't  
benefit from this.  

On Thu, May 7, 2015 at 9:56 AM, Ariel Weisberg   
wrote:  

> Hi,  
>  
> Sorry, didn't mean to blame or come off snarky. I just think it is important not  
> to #include our release process from somewhere else. We don't have to do  
> anything unless it is necessary to meet some requirement of what we are  
> trying to do.  
>  
> So the phrase "Trunk is always releasable" definitely has some wiggle room  
> because you have to define what your release process is.  
>  
> If your requirement is that at any time you be able to tag trunk and ship  
> it within minutes then yes staging branches help solve that problem.  
>  
> The reality is that the release process always takes low single digit days  
> because you branch trunk, then wait for longer running automated tests to  
> run against that branch. If there happens to be a failure you may have to  
> update the branch, but you have bounded how much brokenness sits between you  
> and release already. We also don't have a requirement to be able to ship  
> nigh immediately.  
>  
> We can balance the cost of extra steps and process against the cost of  
> having to delay some releases some of the time by a few days and pick  
> whichever is more important. We are still reducing the amount of time it  
> takes to get a working release. Reduced enough that we should be able to  
> ship every month without difficulty. I have been on a team roughly our size  
> that shipped every three weeks without having staging branches. Trunk broke  
> infrequently enough it wasn't an issue and when it did break it wasn't hard  
> to address. The real pain point was flapping tests and the diffusion of  
> responsibility that prevented them from getting fixed.  
>  
> If I were trying to sell staging branches I would work the angle that I  
> want to be able to bisect trunk without coming across broken revisions.  
> Then balance the value of that with the cost of the process.  
>  
> Ariel  
>  
> On Thu, May 7, 2015 at 10:41 AM, Benedict Elliott Smith <  
> belliottsm...@datastax.com> wrote:  
>  
> > It's a bit unfair to characterize Aleksey as subscribing to a cargo cult.  
> > *We* agreed to define the new release process as "keeping trunk always  
> > releasable".  
> >  
> > Your own words that catalyzed this: "If we release off trunk it is pretty  
> > much necessary for trunk to be in a releasable state all the time"  
> >  
> > It is possible we have been imprecise in our discussions, and people have  
> > agreed to different things. But it does seem to me we agreed to the  
> > position Aleksey is taking, and he is not blindly following some other  
> > process that is not ours.  
> >  
> > On Thu, May 7, 2015 at 3:25 PM, Ariel Weisberg <  
> > ariel.weisb...@datastax.com>  
> > wrote:  
> >  
> > > Hi,  
> > >  
> > > Whoah. Our process is our own. We don't have to subscribe to any cargo
> > > cult book buying seminar giving process.
> > >  
> > > And whatever we do we can iterate and change until it works for us and  
> > > solves the problems we want solved.  
> > >  
> > > Ariel  
> > >  
> > > On Thu, May 7, 2015 at 10:13 AM, Aleksey Yeschenko  >  
> > > wrote:  
> > >  
> > > > Strictly speaking, the train schedule does demand that trunk, and all
> > > > other branches, must be releasable at all times, whether you like it or
> > > > not (for the record - I *don’t* like it, but here we are).
> > > >  
> > > > This, and other annoying things, is what we subscribed to with the
> > > > tick-tock vs. supported branches experiment.
> > > >  
> > > > > We still need to run CI before we release. So what does this buy us?
> > > >  
> > > > Ideally (eventually?) we won’t have to run CI, inc

Re: Staging Branches

2015-05-07 Thread Josh McKenzie
>
> So who and when is going to implement the automation?


I don't believe we have sufficient consensus that this is necessary to
start doling out action-items for implementation.

On Thu, May 7, 2015 at 10:16 AM, Ariel Weisberg  wrote:

> Hi,
>
> If it were automated I would have no problem with it. That would be less
> work for me because the problems detected would occur anyways and have to
> be dealt with by me. I just don't want to deal with extra steps and latency
> manually.
>
> So who and when is going to implement the automation?
>
> Ariel

cqlsh client side filtering

2015-05-07 Thread Jens Rantil
Hi,

Are there any plans (or JIRA issue) for adding client-side filtering to
cqlsh? It would hugely improve our experiences with it when debugging etc.
I wouldn't be against adding some kind of auto LIMIT or a warning when using
it, as I understand users could use it as an anti-pattern, too.

Cheers,
Jens

-- 
Jens Rantil
Backend engineer
Tink AB

Email: jens.ran...@tink.se
Phone: +46 708 84 18 32
Web: www.tink.se



Re: Staging Branches

2015-05-07 Thread Ariel Weisberg
Hi,

I meant in the hypothetical case that we did this. There is going to be an
interim period where we won't have this. The automation comes at the
expense of something else.

Ariel

On Thu, May 7, 2015 at 11:40 AM, Josh McKenzie 
wrote:

> >
> > So who and when is going to implement the automation?
>
>
> I don't believe we have sufficient consensus that this is necessary to
> start doling out action-items for implementation.

Re: Staging Branches

2015-05-07 Thread Jonathan Ellis
On Thu, May 7, 2015 at 7:13 AM, Aleksey Yeschenko 
wrote:

>
> That said, perhaps it’s too much change at once. We still have missing
> pieces of infrastructure, and TE is busy with what’s already back-logged.
> So let’s revisit this proposal in a few months, closer to 3.1 or 3.2, maybe?
>

Agreed.  I would like to wait and see how we do without extra branches for
a release or two.  That will give us a better idea of how much pain the
extra steps will protect us from.


Re: Staging Branches

2015-05-07 Thread Benedict Elliott Smith
>
> I would argue that we must *at least* do the following for now.


If we get this right, the extra staging branches can certainly wait to be
assessed until later.

IMO, any patch should have a branch in CI for each affected mainline
branch, and should have the commit completely wired up (CHANGES.txt, commit
message, the works), so that it can be merged straight in. If it conflicts
significantly, it can be bumped back to the author/reviewer to refresh.

On Thu, May 7, 2015 at 4:16 PM, Aleksey Yeschenko 
wrote:

> I would argue that we must *at least* do the following for now.
>
> If your patch is 2.1-based, you need to create a private git branch for
> that and a merged trunk branch ( and -trunk). And you don’t push
> anything until cassci validates all of those three branches, first.
>
> An issue without a link to cassci for both of those branches passing
> doesn’t qualify as done to me.
>
> That alone will be enough to catch most merge-related regressions.
>
> Going with staging branches would also prevent any issues from concurrent
> pushes, but given the opposition, I’m fine with dropping that requirement,
> for now.
>
> --
> AY

Requiring Java 8 for C* 3.0

2015-05-07 Thread Jonathan Ellis
We discussed requiring Java 8 previously and decided to remain Java
7-compatible, but at the time we were planning to release 3.0 before Java 7
EOL.  Now that 8099 and increased emphasis on QA have delayed us past Java
7 EOL, I think it's worth reopening this discussion.

If we require 8, then we can use lambdas, LongAdder, StampedLock, Streaming
collections, default methods, etc.  Not just in 3.0 but over 3.x for the
next year.

If we don't, then people can choose whether to deploy on 7 or 8 -- but the
vast majority will deploy on 8 simply because 7 is no longer supported
without a premium contract with Oracle.  8 also has a more advanced G1GC
implementation (see CASSANDRA-7486).

I think that gaining access to the new features in 8 as we develop 3.x is
worth losing the ability to run on a platform that will have been EOL for a
couple months by the time we release.
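For illustration, a tiny self-contained sketch of two of the features mentioned — lambdas (here, fed into a parallel stream) and `LongAdder`. This is not Cassandra code, just a demo of the language level being proposed:

```java
import java.util.concurrent.atomic.LongAdder;
import java.util.stream.IntStream;

public class Java8Features {
    public static void main(String[] args) {
        // LongAdder: a striped counter that scales better than AtomicLong
        // under heavy write contention.
        LongAdder requests = new LongAdder();

        // Lambda syntax: i -> requests.increment(), run across a parallel stream.
        IntStream.range(0, 1_000).parallel().forEach(i -> requests.increment());

        // Stream pipeline with a lambda predicate.
        long evens = IntStream.range(0, 1_000).filter(i -> i % 2 == 0).count();

        System.out.println(requests.sum() + " even=" + evens); // 1000 even=500
    }
}
```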

-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder, http://www.datastax.com
@spyced


Re: Requiring Java 8 for C* 3.0

2015-05-07 Thread Ryan McGuire
+1, from a testing perspective we run dtest and unit tests on hotspot 8 and
openjdk 8 and have seen no problems.

On Thu, May 7, 2015 at 12:09 PM, Jonathan Ellis  wrote:

> [...]


Re: Requiring Java 8 for C* 3.0

2015-05-07 Thread Jeremiah D Jordan
With Java 7 being EOL for free versions I am +1 on this.  If you want to stick 
with 7, you can always keep running 2.1.

> On May 7, 2015, at 11:09 AM, Jonathan Ellis  wrote:
> 
> [...]



Re: Requiring Java 8 for C* 3.0

2015-05-07 Thread Yuki Morishita
+1




-- 
Yuki Morishita
 t:yukim (http://twitter.com/yukim)


Re: Requiring Java 8 for C* 3.0

2015-05-07 Thread Gary Dusbabek
+1



Re: Requiring Java 8 for C* 3.0

2015-05-07 Thread Benedict Elliott Smith
I have no position on this, but I would like to issue a word of caution to
everyone excited to use the new JDK8 features in development to please
discuss their use widely beforehand, and to consider them carefully. Many
of them are not generally useful to us (e.g. LongAdder), and may have
unexpected behaviours (e.g. hidden parallelization in streams).
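
To make the "hidden parallelization" point concrete, a hedged sketch (not project code): switching a stream to parallel is a one-word edit that silently moves work onto the JVM-wide ForkJoinPool and gives up encounter order for forEach:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.stream.IntStream;

public class ParallelSurprise {
    // Record the order in which forEach visits the elements 0..n-1.
    static List<Integer> visitOrder(int n, boolean parallel) {
        List<Integer> seen = new CopyOnWriteArrayList<>();
        IntStream s = IntStream.range(0, n);
        (parallel ? s.parallel() : s).forEach(seen::add);
        return seen;
    }

    public static void main(String[] args) {
        // Sequential: runs on the calling thread, visits in encounter order.
        System.out.println(visitOrder(5, false));
        // Parallel: same elements, but visit order and threads (the shared
        // ForkJoinPool.commonPool()) are unspecified -- easy to miss in
        // review, and a real concern for latency-sensitive server code.
        System.out.println(visitOrder(5, true).size());     // 5
    }
}
```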



Re: Requiring Java 8 for C* 3.0

2015-05-07 Thread Aleksey Yeschenko
The switch will necessarily hurt 3.0 adoption, but I think we’ll live. To me, 
the benefits (mostly access to lambdas and default methods, tbh) slightly 
outweigh the downsides.

+0.1

-- 
AY



Re: Requiring Java 8 for C* 3.0

2015-05-07 Thread Nick Bailey
Is running 2.1 with java 8 a supported or recommended way to run at this
point? If not then we'll be requiring users to upgrade both java and C* at
the same time when making the jump to 3.0.



Re: Requiring Java 8 for C* 3.0

2015-05-07 Thread Jeremy Hanna
There’s no reason why people can’t run java 8 with 2.1.  IIRC the only issue 
we’d had with it was Dave’s 
https://issues.apache.org/jira/browse/CASSANDRA-7028.  That’s probably the best 
thing for people to do though - run java 8 with 2.1 so the jump to 3.0 isn’t as 
significant.  Good point.

> On May 7, 2015, at 11:43 AM, Nick Bailey  wrote:
> 
> Is running 2.1 with java 8 a supported or recommended way to run at this
> point? If not then we'll be requiring users to upgrade both java and C* at
> the same time when making the jump to 3.0.



Re: Requiring Java 8 for C* 3.0

2015-05-07 Thread Jonathan Ellis
Yes, it is.

On Thu, May 7, 2015 at 9:43 AM, Nick Bailey  wrote:

> Is running 2.1 with java 8 a supported or recommended way to run at this
> point? If not then we'll be requiring users to upgrade both java and C* at
> the same time when making the jump to 3.0.



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder, http://www.datastax.com
@spyced


Re: Staging Branches

2015-05-07 Thread Aleksey Yeschenko
> Agreed. I would like to wait and see how we do without extra branches for 
> a release or two. That will give us a better idea of how much pain the 
> extra steps will protect us from.

In the meantime can we agree on having cassci to validate personal merged 
branches before pushing either, in case of non-trunk patches?

That doesn’t require any more infra setup, and with the delta between 2.0 and 
2.1, and 2.1 and 3.0, and, sometimes, 2.0 and 3.0, doing so is crucial.

I know that at least Sam does that already, but I want it to be an agreed upon 
procedure.

Oh, and as Benedict said, it’d be nice for all those commits to start including 
both CHANGES.txt updates and commit messages. With the developer-to-committer 
ratio that we have now, it helps the process scale better.

-- 
AY

On May 7, 2015 at 19:02:36, Benedict Elliott Smith (belliottsm...@datastax.com) 
wrote:

>  
> I would argue that we must *at least* do the following for now.  


If we get this right, the extra staging branches can certainly wait to be  
assessed until later.  

IMO, any patch should have a branch in CI for each affected mainline  
branch, and should have the commit completely wired up (CHANGES.txt, commit  
message, the works), so that it can be merged straight in. If it conflicts  
significantly, it can be bumped back to the author/reviewer to refresh.  

On Thu, May 7, 2015 at 4:16 PM, Aleksey Yeschenko   
wrote:  

> I would argue that we must *at least* do the following for now.  
>  
> If your patch is 2.1-based, you need to create a private git branch for  
> that and a merged trunk branch ( and -trunk). And you don’t push  
> anything until cassci validates all of those three branches, first.  
>  
> An issue without a link to cassci for both of those branches passing  
> doesn’t qualify as done to me.  
>  
> That alone will be enough to catch most merge-related regressions.  
>  
> Going with staging branches would also prevent any issues from concurrent  
> pushes, but given the opposition, I’m fine with dropping that requirement,  
> for now.  
>  
> --  
> AY  
>  
> On May 7, 2015 at 18:04:20, Josh McKenzie (josh.mcken...@datastax.com)  
> wrote:  
>  
> >  
> > Merging is *hard*. Especially 2.1 -> 3.0, with many breaking API changes  
> > (this is before 8099, which is going to make a *world* of hurt, and will  
> > stick around for a year). It is *very* easy to break things, with even  
> the  
> > utmost care.  
>  
>  
> While I agree re:merging, I'm not convinced the proportion of commits that  
> will benefit from a staging branch testing pipeline is high enough to  
> justify the time and complexity overhead to (what I expect are) the vast  
> majority of commits that are smaller, incremental changes that won't  
> benefit from this.  
>  
> On Thu, May 7, 2015 at 9:56 AM, Ariel Weisberg <  
> ariel.weisb...@datastax.com>  
> wrote:  
>  
> > Hi,  
> >  
> > Sorry, didn't mean to blame or come off snarky. I just think it is important not  
> > to #include our release process from somewhere else. We don't have to do  
> > anything unless it is necessary to meet some requirement of what we are  
> > trying to do.  
> >  
> > So the phrase "Trunk is always releasable" definitely has some wiggle  
> room  
> > because you have to define what your release process is.  
> >  
> > If your requirement is that at any time you be able to tag trunk and ship  
> > it within minutes then yes staging branches help solve that problem.  
> >  
> > The reality is that the release process always takes low single digit  
> days  
> > because you branch trunk, then wait for longer running automated tests to  
> > run against that branch. If there happens to be a failure you may have to  
> > update the branch, but you have bounded how much brokenness sits between  
> you  
> > and release already. We also don't have a requirement to be able to ship  
> > nigh immediately.  
> >  
> > We can balance the cost of extra steps and process against the cost of  
> > having to delay some releases some of the time by a few days and pick  
> > whichever is more important. We are still reducing the amount of time it  
> > takes to get a working release. Reduced enough that we should be able to  
> > ship every month without difficulty. I have been on a team roughly our  
> size  
> > that shipped every three weeks without having staging branches. Trunk  
> broke  
> > infrequently enough it wasn't an issue and when it did break it wasn't  
> hard  
> > to address. The real pain point was flapping tests and the diffusion of  
> > responsibility that prevented them from getting fixed.  
> >  
> > If I were trying to sell staging branches I would work the angle that I  
> > want to be able to bisect trunk without coming across broken revisions.  
> > Then balance the value of that with the cost of the process.  
> >  
> > Ariel  
> >  
> > On Thu, May 7, 2015 at 10:41 AM, Benedict Elliott Smith <  
> > belliottsm...@datastax.com> wrote:

Re: cqlsh client side filtering

2015-05-07 Thread Tyler Hobbs
On Thu, May 7, 2015 at 10:42 AM, Jens Rantil  wrote:

>
> Are there any plans (or JIRA issue) for adding client-side filtering to
> cqlsh? It would hugely improve our experiences with it when debugging etc.
> I wouldn't be against adding some kind of auto LIMIT or warning when using
> it as I understand users could use it as an anti-pattern, too.


There are general plans to increase the types of filtering that Cassandra
can do server-side, but CASSANDRA-8099 is necessary for a lot of that work.

We prefer not to support things in cqlsh that can't be done through normal
cql queries (outside of basic admin-type operations).  What sort of API are
you envisioning?
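
In the meantime, the client-side filtering Jens describes can live entirely in application code over an already-fetched (and LIMITed) result set, regardless of driver. A driver-agnostic Java sketch with invented row data, purely illustrative:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ClientSideFilter {
    static final class Row {
        final String user;
        final int logins;
        Row(String user, int logins) { this.user = user; this.logins = logins; }
    }

    // The "client-side WHERE": an ordinary predicate applied after fetching.
    // Anything the server cannot evaluate today can still be expressed here.
    static List<String> usersWithLoginsAbove(List<Row> fetched, int threshold) {
        return fetched.stream()
                .filter(r -> r.logins > threshold)
                .map(r -> r.user)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Stand-in for rows returned by a SELECT; a real query should still
        // carry a server-side LIMIT so the client never pulls the whole table.
        List<Row> fetched = Arrays.asList(
                new Row("alice", 12), new Row("bob", 3), new Row("carol", 9));
        System.out.println(usersWithLoginsAbove(fetched, 5));   // [alice, carol]
    }
}
```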


-- 
Tyler Hobbs
DataStax 


Re: Staging Branches

2015-05-07 Thread Ryan McGuire
> In the meantime can we agree on having cassci to validate personal merged
branches before pushing either, in case of non-trunk patches?

+100 from me, but why the exception for trunk? Wouldn't it be easier to
wait for the dev branch tests to pass and then do all the merging at once
(2.0, 2.1, 3.0, trunk)?

On Thu, May 7, 2015 at 1:21 PM, Aleksey Yeschenko 
wrote:

> > Agreed. I would like to wait and see how we do without extra branches
> for
> > a release or two. That will give us a better idea of how much pain the
> > extra steps will protect us from.
>
> In the meantime can we agree on having cassci to validate personal merged
> branches before pushing either, in case of non-trunk patches?
>
> That doesn’t require any more infra setup, and with the delta between 2.0
> and 2.1, and 2.1 and 3.0, and, sometimes, 2.0 and 3.0, doing so is crucial.
>
> I know that at least Sam does that already, but I want it to be an agreed
> upon procedure.
>
> Oh, ans as Benedict said, it’d be nice for all those commits to start
> including CHANGES.txt and the commit messages, both. With the developers
> total/committers ratio that we have now, it helps the process scale better.
>
> --
> AY

Re: Staging Branches

2015-05-07 Thread Aleksey Yeschenko
> +100 from me, but why the exception for trunk?

Sorry, just my poor wording again. No exception for trunk. What I meant by 
‘non-trunk’ patches was patches that originated in 2.0 or 2.1 branches. ‘trunk’ 
patches (today) would be 3.0-only features and fixes, and these need no 
upstream merging, b/c trunk is already as upstream as it gets.

Of course you’d still have cassci vet the trunk branch itself - we already 
should be doing that.

-- 
AY


Re: Staging Branches

2015-05-07 Thread Ariel Weisberg
Hi,

I agree with release (or even collaboration) branches taking the same
approach: only merge code that has run in cassci off of that branch.

Ariel


Re: Requiring Java 8 for C* 3.0

2015-05-07 Thread Pierre-Yves Ritschard
What would be the recommended JDK? Is HotSpot still the way to go, or do
JDK 8 users already consider OpenJDK production-grade now?

On 05/07/2015 07:00 PM, Jonathan Ellis wrote:
> Yes, it is.
>