Re: [DISCUSS] CEP-7 Storage Attached Index

2020-08-18 Thread Mick Semb Wever
>
> We are looking forward to the community's feedback and suggestions.
>


What comes immediately to mind is testing requirements. It has already been
mentioned that the project's testability and QA guidelines are inadequate
for successfully introducing new features and refactorings to the codebase.
During the 4.0 beta phase this was intended to be addressed, i.e. defining
more specific QA guidelines for 4.0-rc. This would be an important step
towards QA guidelines for all changes and CEPs post-4.0.

Questions from me:
 - How will this be tested, and how will its QA status and lifecycle be
defined? (per above)
 - With existing C* code needing to be changed, what is the proposed plan
for making those changes while maintaining QA, e.g. are there separate QA
cycles planned for altering the SPI before adding a new SPI implementation?
 - Despite being out of scope, it would be nice to have some idea from the
CEP author of when users might still choose 2i or SASI afresh over SAI.
 - Who fills the roles involved? Who are the contributors on this DataStax
team? Who is the shepherd? Are there other stakeholders willing to be
involved?
 - Is there a preference to use a gdoc instead of the project's wiki, and
why? (The CEP process suggests a wiki page, and feedback on why another
approach is considered better helps evolve the CEP process itself.)

cheers,
Mick


Re: [DISCUSS] CEP-7 Storage Attached Index

2020-08-18 Thread Jasonstack Zhao Yang
Mick, thanks for your questions.

> During the 4.0 beta phase this was intended to be addressed, i.e. defining
> more specific QA guidelines for 4.0-rc. This would be an important
> step towards QA guidelines for all changes and CEPs post-4.0.

Agreed. I think CASSANDRA-15536 (4.0 Quality: Components and Test Plans) has
set a good example of QA/testing.

>  - How will this be tested, and how will its QA status and lifecycle be
> defined? (per above)

SAI will follow the same QA/testing guidelines as CASSANDRA-15536.

>  - With existing C* code needing to be changed, what is the proposed plan
> for making those changes while maintaining QA, e.g. are there separate QA
> cycles planned for altering the SPI before adding a new SPI implementation?

The plan is to have interface changes and their new implementations
reviewed/tested/merged at once, to reduce overhead.

But if reviewing/testing/merging the interface changes separately helps
quality, I don't think anyone will object.

> - Despite being out of scope, it would be nice to have some idea from the
> CEP author of when users might still choose 2i or SASI afresh over SAI.

I'd like SAI to be the only index for users, but this is a decision to be
made by the community.

> - Who fills the roles involved?

Contributors that are still active on C* or related projects:

Andres de la Peña
Caleb Rackliffe
Dan LaRocque
Jason Rutherglen
Mike Adamson
Rocco Varela
Zhao Yang

I will shepherd.

Anyone interested in C* indexing is welcome to join us in the #cassandra-sai
Slack channel.

> - Is there a preference to use a gdoc instead of the project's wiki, and
> why? (The CEP process suggests a wiki page, and feedback on why another
> approach is considered better helps evolve the CEP process itself.)

Didn't notice the wiki was expected. Will port the CEP to the wiki.


Re: [DISCUSS] CEP-7 Storage Attached Index

2020-08-18 Thread DuyHai Doan
Thank you, Zhao Yang, for starting this topic.

After reading the short design doc, I have a few questions:

1) SASI was pretty inefficient at indexing wide partitions because the index
structure only retains the partition token, not the clustering columns. As
per the design doc, SAI has a row ID mapping to the partition offset; can we
hope that indexing wide partitions will be more efficient with SAI? One
detail that worries me is that at the beginning of the design doc it is said
that the matching rows are post-filtered while scanning the partition. Can
you confirm or deny that SAI is efficient with wide partitions and provides
the partition offsets of the matching rows?
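
To make this concrete, a hypothetical example (the schema and names are
illustrative only, not from the CEP):

    -- One sensor per partition, many readings per partition
    -- (clustering column ts): a classic wide partition.
    CREATE TABLE sensors.readings (
        sensor_id uuid,
        ts timestamp,
        temperature double,
        PRIMARY KEY (sensor_id, ts)
    );

    -- If an index on temperature only resolves the partition token,
    -- every matching partition must be scanned and post-filtered;
    -- if it resolves row offsets, it can seek straight to the rows.
    SELECT * FROM sensors.readings WHERE temperature > 30;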

2) About space efficiency: one of the biggest drawbacks of SASI was the huge
space required for the index structure when using CONTAINS logic, because of
the decomposition of text columns into n-grams. Will SAI suffer from the
same issue in future iterations? I'm anticipating a bit.
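
For illustration, the SASI setup I have in mind (the table is illustrative;
the index class and CONTAINS mode option are SASI's existing syntax):

    CREATE TABLE mail.msgs (
        id uuid PRIMARY KEY,
        body text
    );

    -- CONTAINS mode enables LIKE '%...%' searches, at the cost of
    -- indexing each decomposed gram/suffix of every term as a
    -- separate entry: the source of the space blowup described above.
    CREATE CUSTOM INDEX msgs_body_idx ON mail.msgs (body)
    USING 'org.apache.cassandra.index.sasi.SASIIndex'
    WITH OPTIONS = { 'mode': 'CONTAINS' };

    SELECT * FROM mail.msgs WHERE body LIKE '%cassandra%';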

3) If I'm querying using SAI and providing the complete partition key, will
it be more efficient than querying without the partition key? In other
words, does SAI provide any optimisation when the partition key is
specified?
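
Reusing the hypothetical table above, the two query shapes I am comparing:

    -- Partition pinned down: ideally SAI only consults index data for
    -- this one partition (or skips the index work entirely).
    SELECT * FROM sensors.readings
    WHERE sensor_id = 5f2ab8e2-0f5f-4f4e-9d2b-1c0d6276f9f1
      AND temperature > 30;

    -- Unrestricted: the index must be consulted across all partitions.
    SELECT * FROM sensors.readings WHERE temperature > 30;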

Regards

Duy Hai DOAN


Re: [DISCUSS] CEP-7 Storage Attached Index

2020-08-18 Thread Benedict Elliott Smith
> SAI will follow the same QA/testing guidelines as CASSANDRA-15536.

CASSANDRA-15536 might set some good examples for retrospectively shoring up
our quality assurance, but it offers no prescriptions for how we approach
the testing of new work. I think the project needs to conclude the
discussions that keep being started around the "definition of done" before
determining what sufficient quality assurance looks like for this feature.

I've briefly set out some of my views in an earlier email chain initiated by
Josh, which unfortunately received no response. The project is generally
very busy right now as we approach the 4.0 release, which I assume is partly
why there has been no movement. Assuming no further activity from others, as
we get closer to 4.0 (and I have more time) I will try to produce a more
formal proposal for quality assurance for the project, to be debated and
agreed.


Re: [DISCUSS] CEP-7 Storage Attached Index

2020-08-18 Thread DuyHai Doan
Last but not least

4) Are collections, static columns, composite partition key components and
UDT indexing (at any depth) on the roadmap for SAI? I strongly believe those
features are the bare minimum to make SAI an interesting replacement for the
native 2nd index as well as SASI. SASI's limited support for those advanced
data structures has hindered its wide adoption (among other issues and
bugs).
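
For reference, here is what the native 2i can already index on collections
today, i.e. the baseline I would expect SAI to match (table is
illustrative):

    CREATE TABLE app.users (
        id uuid PRIMARY KEY,
        emails set<text>,
        attrs map<text, text>
    );

    CREATE INDEX ON app.users (emails);          -- set values
    CREATE INDEX ON app.users (keys(attrs));     -- map keys
    CREATE INDEX ON app.users (entries(attrs));  -- map entries

    SELECT * FROM app.users WHERE emails CONTAINS 'x@y.z';
    SELECT * FROM app.users WHERE attrs CONTAINS KEY 'country';
    SELECT * FROM app.users WHERE attrs['country'] = 'FR';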

Regards

Duy Hai DOAN


Re: [DISCUSS] A point of view on Testing Cassandra

2020-08-18 Thread Joshua McKenzie
This totally dropped off my radar; the call-out from the SAI thread reminded
me. Thanks, Benedict.

I think you raised some great points here about what "minimum viable
testing" might look like for a new feature:

> New features should be required to include randomised integration tests
> that exercise all of the functions of the feature in random combinations
> and verifies that the behaviour is consistent with expectation.  New
> functionality for an existing feature should augment any existing such
> tests to include the new functionality in its random exploration of
> behaviour.
>


> For a given system/feature/function, we should run with _every_ user
> option and every feature behaviour at least once;
>
Aim for testing all combinations of options and features if possible; fall
back to random combinations if not.
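
As a purely illustrative sketch (not project code; the option names are
invented) of how seeded random combinations keep failures reproducible:

    import java.util.Random;

    public class RandomizedOptionsSketch
    {
        public static void main(String[] args)
        {
            // A fixed-but-overridable seed makes any failing combination
            // reproducible from the test log.
            long seed = Long.getLong("test.seed", System.nanoTime());
            Random random = new Random(seed);
            System.out.println("test.seed=" + seed);

            for (int run = 0; run < 100; run++)
            {
                // Hypothetical feature options, chosen at random each run.
                boolean compression = random.nextBoolean();
                boolean mixedSchema = random.nextBoolean();
                int flushThresholdMb = 1 << random.nextInt(8); // 1..128 MB

                // A real harness would exercise the feature end-to-end here
                // and assert the behaviour matches expectation.
                System.out.printf("run %d: compression=%b mixedSchema=%b flush=%dMB%n",
                                  run, compression, mixedSchema, flushThresholdMb);
            }
        }
    }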

Some high-level principles seem to surface from the discussion (I'm taking a
bit of editorial liberty, adding some things and intentionally simplifying
here):

For releases:

   - No perf regressions
   - A healthy (several hundred) corpus of user schemas tested in both
   mixed version and final stable version clusters
   - A tool to empower end users to test their schemas and workloads on
   mixed version and new version clusters prior to upgrading
   - Green test board
   - Adversarial testing


For new features:

   - All functions exercised w/random inputs and deliberate bad inputs
   - All functions exercised intentionally at boundary conditions
   - All functions exercised in a variety of failure and exception scenarios
   - Run with every user option and feature behavior at least once
   - Aim for testing all combinations of options and features if possible,
   fall back to random combinations if not
   - At least N% code coverage on tests

Maybe some of the above will prove useful or validating for the work you're
doing on articulating a tactical PoV on testing for the project, Benedict.

On Thu, Jul 16, 2020 at 5:26 AM Benedict Elliott Smith 
wrote:

> Thanks for getting the ball rolling.  I think we need to be a lot more
> specific, though, and it may take some time to hash it all out.
>
> For starters we need to distinguish between types of "done" - are we
> discussing:
>  - Release
>  - New Feature
>  - New Functionality (for an existing feature)
>  - Performance Improvement
>  - Minor refactor
>  - Bug fix
>
> ?  All of these (perhaps more) require unique criteria in my opinion.
>
> For example:
>  - New features should be required to include randomised integration tests
> that exercise all of the functions of the feature in random combinations
> and verifies that the behaviour is consistent with expectation.  New
> functionality for an existing feature should augment any existing such
> tests to include the new functionality in its random exploration of
> behaviour.
>  - Releases are more suitable for many of your cluster-level tests, IMO,
> particularly if we get regular performance regression tests running against
> trunk (something for a shared roadmap)
>
> Then, there are various things that need specifying more clearly, e.g.:
>
> > Minimum 75% code coverage on non-boilerplate code
> Coverage by what? In my model, randomised integration tests of the
> relevant feature, but we need to agree specifically. Some thoughts:
>  - Not clear the value of code coverage measures, but 75% perhaps an
> acceptable arbitrary number if we want a lower bound
>  - More pertinent measure is options and behaviours
> - For a given system/feature/function, we should run with _every_ user
> option and every feature behaviour at least once;
> - Where tractable, exhaustive coverage (every combination of option,
> with every logical behaviour);
> - Where not possible, random combinations of options and behaviours.
>
> > - Some form of the above in mixed-version clusters
> I think we need to include mixed-schema, and modified-schema clusters as
> well, as this is a significant source of bugs
>
> > aggressively adversarial scenarios
> As far as chaos is concerned, I hope to bring an addition to in-jvm dtests
> soon, that should facilitate this for more targeted correctness tests - so
> problems can be surfaced more rapidly and repeatably.  Also with much less
> hardware :)
>
>
> On 15/07/2020, 22:35, "Joshua McKenzie"  wrote:
>
> I like that the "we need a Definition of Done" seems to be surfacing.
> No
> directed intent from opening this thread but it seems a serendipitous
> outcome. And to reiterate - I didn't open this thread with the hope or
> intent of getting all of us to agree on anything or explore what we
> should
> or shouldn't agree on. That's not my place nor is it historically how
> we
> seem to operate. :) Just looking to share a PoV so other project
> participants know about some work coming down the pipe and can engage
> if
> they're interested.
>
> Brainstorming here to get discussion started, which we could drop in a
> doc
>   

Re: [DISCUSS] Revisiting Java 11's experimental status

2020-08-18 Thread Joshua McKenzie
Where did we land on this? We don't seem to have a clear consensus from the
thread discussion.

On Mon, Jul 20, 2020 at 10:02 PM Deepak Vohra 
wrote:

>  The same link was posted earlier also.
> For Java 8 and 11 the poll result is very similar.
> Java 8 = 58.4%, Java 11 = 22.56%
>
>
> On Monday, July 20, 2020, 04:38:03 p.m. PDT, Joshua McKenzie <
> jmcken...@apache.org> wrote:
>
>  That's remarkably close to the jrebel results for 2020:
>
> https://www.jrebel.com/blog/2020-java-technology-report#java-version
>
>  Came across this this past weekend doing unrelated research; can't vouch
> for the accuracy / methods / etc.
>
>
> On Mon, Jul 20, 2020 at 7:32 PM Jeff Jirsa  wrote:
>
> > Got it, thanks for the correction.
> >
> >
> > On Mon, Jul 20, 2020 at 4:28 PM Brandon Williams 
> wrote:
> >
> > > I believe you can run them on 11, but you can't build them on it.
> > >
> > > On Mon, Jul 20, 2020 at 6:11 PM Jeff Jirsa  wrote:
> > > >
> > > > I still dont get it, because you can't use any released version of
> > > > cassandra with anything other than jdk8.
> > > >
> > > >
> > > >
> > > > On Mon, Jul 20, 2020 at 2:50 PM Patrick McFadin 
> > > wrote:
> > > >
> > > > > Follow-up on the informal poll I did on twitter:
> > > > > https://twitter.com/patrickmcfadin/status/1282791302065557504?s=21
> > > > >
> > > > > Offered up as data to be used as you will.
> > > > >
> > > > > 161 votes
> > > > > <= JDK8: 59%
> > > > > JDK9 or 10: 7%
> > > > > JDK11 or 12: 27%
> > > > > JDK13 or 14: 7%
> > > > >
> > > > >
> > > > >
> > > > > On Wed, Jul 15, 2020 at 3:19 AM Robert Stupp 
> wrote:
> > > > >
> > > > > > Yea, ZGC is kinda tricky in 11.
> > > > > >
> > > > > > —
> > > > > > Robert Stupp
> > > > > > @snazy
> > > > > >
> > > > > > > On 14. Jul 2020, at 15:02, Jeff Jirsa 
> wrote:
> > > > > > >
> > > > > > > Zgc
> > > > > > >
> > > > > > >> On Jul 14, 2020, at 2:26 AM, Robert Stupp 
> > wrote:
> > > > > > >>
> > > > > > >> 
> > > > > > >>> On 14. Jul 2020, at 07:33, Jeff Jirsa 
> > wrote:
> > > > > > >>>
> > > > > > >>> Perhaps the most notable parts of jdk11 (for cassandra)
> aren’t
> > > even
> > > > > > prod ready in jdk11 , so what’s the motivation and what does the
> > > project
> > > > > > gain from revisiting the experimental designation on jdk11?
> > > > > > >>
> > > > > > >> Can you elaborate on what’s not even prod ready in Java 11?
> > > > > > >
> > > > > > >
> > > -
> > > > > > > To unsubscribe, e-mail: dev-unsubscr...@cassandra.apache.org
> > > > > > > For additional commands, e-mail: dev-h...@cassandra.apache.org
> > > > > > >
> > > > > >
> > > > > >
> > > > >
> > >
> > > -
> > > To unsubscribe, e-mail: dev-unsubscr...@cassandra.apache.org
> > > For additional commands, e-mail: dev-h...@cassandra.apache.org
> > >
> > >
> >


Re: [DISCUSS] Revisiting Java 11's experimental status

2020-08-18 Thread David Capwell
I would propose the following:

1) Leave JDK 11 marked as experimental.
2) Make sure CI runs both JDK 8 and JDK 11 for all builds (CircleCI /
Jenkins); a sketch follows below.
3) During 4.0 qualification, issues found on JDK 11 should block the
release.

This should get us in good shape to potentially be ready to flip the switch
in 4.1 or even 4.0.1; given that not everyone is signing up to test Java 11,
#3 might not be enough to fully mark it stable.
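
For (2), a purely hypothetical sketch of the shape in a CircleCI-style
config (the job and parameter names are invented, not our actual config):

    workflows:
      version: 2
      build-and-test:
        jobs:
          # Run the identical job under both JDKs so JDK 11 breakage
          # surfaces on every change, not just at release time.
          - build-and-test:
              name: build-and-test-jdk8
              jdk: "8"
          - build-and-test:
              name: build-and-test-jdk11
              jdk: "11"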


Re: Cassandra Kubernetes SIG - status update

2020-08-18 Thread Cyril Scetbon
Hey John,

I'm also in the middle of evaluating this image, as I want to test Cassandra
4.0-beta1 and am trying to avoid having to handle changes at the
configuration level (deprecated/renamed parameters, for instance). I also
have an issue with the fact that the seed_provider class is not
parameterized. See
https://github.com/datastax/cass-config-definitions/issues/19.
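
For context, the relevant block in cassandra.yaml. The stock form below is
the default that ships with Cassandra; the commented alternative is the kind
of substitution the issue asks the config builder to allow:

    seed_provider:
        # Default shipped with Cassandra:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "127.0.0.1"
        # What we would like to be able to emit instead, e.g. the
        # K8SeedProvider John mentions below:
        # - class_name: org.apache.cassandra.locator.K8SeedProvider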

Did you happen to fork the project with that change?

Thanks
—
Cyril Scetbon

> On Aug 5, 2020, at 11:51 PM, John Sanda  wrote:
> 
> cass-config-builder configures Cassandra to
> use org.apache.cassandra.locator.K8SeedProvider. This class however is
> defined in management-api-for-apache-cassandra, i.e.,
> the DataStax management sidecar. I am not using the management sidecar yet,
> but updated my C* image to include the agent JAR which contains the
> K8SeedProvider class. I am still trying to iron out some of the wrinkles.



Re: Cassandra Kubernetes SIG - status update

2020-08-18 Thread Christopher Bradford
It sounds like he just included the agent jar which has this class on the
CLASSPATH. This feels like an enhancement for the definitions config
builder uses to allow for configuration of the seed provider.

--
Christopher Bradford