QA signup

2018-09-06 Thread Jonathan Haddad
For 4.0, I'm thinking it would be a good idea to put together a list of the
things that need testing and see if people are willing to help test / break
those things.  My goal here is to get as much coverage as possible, and let
folks focus on really hammering on specific things rather than just firing
up a cluster and rubber stamping it.  If we're going to be able to
confidently deploy 4.0 quickly after its release, we're going to need a
high attention to detail.

In addition to a signup sheet, I think providing some guidance on how to QA
each thing that's being tested would go a long way.  Throwing "hey please
test sstable streaming" over the wall will only get quality feedback from
folks that are already heavily involved in the development process.  It
would be nice to bring some new faces into the project by providing a
little guidance.

We could help facilitate this even further by considering the people
signing up to test a particular feature as a team, with seasoned Cassandra
veterans acting as team leads.

Any thoughts?  I'm happy to take the lead on this.
-- 
Jon Haddad
http://www.rustyrazorblade.com
twitter: rustyrazorblade


Re: QA signup

2018-09-06 Thread Varun Barala
+1
I personally would like to contribute.

On Thu, Sep 6, 2018 at 8:51 PM Jonathan Haddad  wrote:

> For 4.0, I'm thinking it would be a good idea to put together a list of the
> things that need testing and see if people are willing to help test / break
> those things.  My goal here is to get as much coverage as possible, and let
> folks focus on really hammering on specific things rather than just firing
> up a cluster and rubber stamping it.  If we're going to be able to
> confidently deploy 4.0 quickly after its release, we're going to need a
> high attention to detail.
>
> In addition to a signup sheet, I think providing some guidance on how to QA
> each thing that's being tested would go a long way.  Throwing "hey please
> test sstable streaming" over the wall will only get quality feedback from
> folks that are already heavily involved in the development process.  It
> would be nice to bring some new faces into the project by providing a
> little guidance.
>
> We could help facilitate this even further by considering the people
> signing up to test a particular feature as a team, with seasoned Cassandra
> veterans acting as team leads.
>
> Any thoughts?  I'm happy to take the lead on this.
> --
> Jon Haddad
> http://www.rustyrazorblade.com
> twitter: rustyrazorblade
>


Re: QA signup

2018-09-06 Thread Jordan West
Thanks for starting this thread, Jon!

On Thu, Sep 6, 2018 at 5:51 AM Jonathan Haddad  wrote:

> For 4.0, I'm thinking it would be a good idea to put together a list of the
> things that need testing and see if people are willing to help test / break
> those things.  My goal here is to get as much coverage as possible, and let
> folks focus on really hammering on specific things rather than just firing
> up a cluster and rubber stamping it.  If we're going to be able to
> confidently deploy 4.0 quickly after its release, we're going to need a
> high attention to detail.
>
>
+1 to a more coordinated effort. I think we could use the Confluence that
was set up a little bit ago since it was set up for this purpose, at least
for finalized plans and results:
https://cwiki.apache.org/confluence/display/CASSANDRA.


> In addition to a signup sheet, I think providing some guidance on how to QA
> each thing that's being tested would go a long way.  Throwing "hey please
> test sstable streaming" over the wall will only get quality feedback from
> folks that are already heavily involved in the development process.  It
> would be nice to bring some new faces into the project by providing a
> little guidance.
>

> We could help facilitate this even further by considering the people
> signing up to test a particular feature as a team, with seasoned Cassandra
> veterans acting as team leads.
>

+1 to this as well. I am always a fan of folks learning about a
subsystem/project through testing. It can be challenging to get folks new
to a project excited about testing first but for those that do, or for
committers who want to learn another part of the db, it's a great way to
learn.

Another thing we can do here is make sure teams are writing about the
testing they are doing and their results. This will help share knowledge
about techniques and approaches that others can then apply. This knowledge
can be shared on the mailing list, a blog post, or in JIRA.

 Jordan


> Any thoughts?  I'm happy to take the lead on this.
> --
> Jon Haddad
> http://www.rustyrazorblade.com
> twitter: rustyrazorblade
>


Re: Java 11 Z garbage collector

2018-09-06 Thread Carl Mueller
Thanks Jeff.

On Fri, Aug 31, 2018 at 1:01 PM Jeff Jirsa  wrote:

> A read-heavy workload with wider partitions (like 1-2GB) and the key
> cache disabled will be the worst case for GC
>
>
>
>
> --
> Jeff Jirsa
>
>
> > On Aug 31, 2018, at 10:51 AM, Carl Mueller 
> > 
> wrote:
> >
> > I'm assuming the p99 that Rocksandra tries to target is caused by GC
> > pauses, does anyone have data patterns or datasets that will generate GC
> > pauses in Cassandra to highlight the abilities of Rocksandra (and...
> > Scylla?) and perhaps this GC approach?
> >
> > On Thu, Aug 30, 2018 at 8:11 PM Carl Mueller <
> carl.muel...@smartthings.com>
> > wrote:
> >
> >> Oh nice, I'll check that out.
> >>
> >> On Thu, Aug 30, 2018 at 11:07 AM Jonathan Haddad 
> >> wrote:
> >>
> >>> Advertised, yes, but so far I haven't found it to be any better than
> >>> ParNew + CMS or G1 in the performance tests I did when writing
> >>> http://thelastpickle.com/blog/2018/08/16/java11.html.
> >>>
> >>> That said, I didn't try it with a huge heap (I think it was 16 or
> 24GB),
> >>> so
> >>> maybe it'll do better if I throw 50 GB RAM at it.
> >>>
> >>>
> >>>
> >>> On Thu, Aug 30, 2018 at 8:42 AM Carl Mueller
> >>>  wrote:
> >>>
>  https://www.opsian.com/blog/javas-new-zgc-is-very-exciting/
> 
>  .. max of 4ms for stop the world, large terabyte heaps, seems
> promising.
> 
>  Will this be a major boon to cassandra p99 times? Anyone know the
> >>> aspects
>  of cassandra that cause the most churn and lead to StopTheWorld GC? I
> >>> was
>  under the impression that bloom filters, caches, etc are statically
>  allocated at startup.
> 
> >>>
> >>>
> >>> --
> >>> Jon Haddad
> >>> http://www.rustyrazorblade.com
> >>> twitter: rustyrazorblade
> >>>
> >>
>
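
For anyone who wants to try ZGC against a test node: on Java 11 it is still
experimental (and Linux-only), so it has to be unlocked explicitly. A
minimal set of JVM options, with the heap size purely illustrative:

    # Java 11, Linux only; ZGC must be unlocked as an experimental option
    -XX:+UnlockExperimentalVMOptions
    -XX:+UseZGC
    -Xmx50g

To set up the worst case Jeff describes, the key cache can be disabled per
table with ALTER TABLE ... WITH caching = {'keys': 'NONE',
'rows_per_partition': 'NONE'} before running a read-heavy load against wide
partitions.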


Re: QA signup

2018-09-06 Thread Jonathan Haddad
I was thinking along the same lines.  For this to be successful I think
either weekly or bi-weekly summary reports back to the mailing list by the
team lead for each subsection on what's been tested and how it's been
tested will help keep things moving along.

In my opinion the lead for each team should *not* be the contributor that
wrote the feature, but someone who's very interested in it and can use the
contributor as a resource.  I think it would be difficult for the
contributor to poke holes in their own work - if they could do that it
would have been done already.  This should be a verification process that's
as independent as possible from the original work.

In addition to the QA process, it would be great if we could get a docs
team together.  We've still got quite a few undocumented features and
nuances; I think hammering that out would be a good idea.  Mick brought up
updating the website docs in the thread on testing different JDKs [1], if
we could figure that out in the process we'd be in a really great position
from the user perspective.

Jon

[1]
https://lists.apache.org/thread.html/5645178efb57939b96e73ab9c298e80ad8e76f11a563b4d250c1ae38@%3Cdev.cassandra.apache.org%3E

On Thu, Sep 6, 2018 at 10:35 AM Jordan West  wrote:

> Thanks for starting this thread, Jon!
>
> On Thu, Sep 6, 2018 at 5:51 AM Jonathan Haddad  wrote:
>
> > For 4.0, I'm thinking it would be a good idea to put together a list of
> the
> > things that need testing and see if people are willing to help test /
> break
> > those things.  My goal here is to get as much coverage as possible, and
> let
> > folks focus on really hammering on specific things rather than just
> firing
> > up a cluster and rubber stamping it.  If we're going to be able to
> > confidently deploy 4.0 quickly after its release, we're going to need a
> > high attention to detail.
> >
> >
> +1 to a more coordinated effort. I think we could use the Confluence that
> was set up a little bit ago since it was set up for this purpose, at least
> for finalized plans and results:
> https://cwiki.apache.org/confluence/display/CASSANDRA.
>
>
> > In addition to a signup sheet, I think providing some guidance on how to
> QA
> > each thing that's being tested would go a long way.  Throwing "hey please
> > test sstable streaming" over the wall will only get quality feedback from
> > folks that are already heavily involved in the development process.  It
> > would be nice to bring some new faces into the project by providing a
> > little guidance.
> >
>
> > We could help facilitate this even further by considering the people
> > signing up to test a particular feature as a team, with seasoned
> Cassandra
> > veterans acting as team leads.
> >
>
> +1 to this as well. I am always a fan of folks learning about a
> subsystem/project through testing. It can be challenging to get folks new
> to a project excited about testing first but for those that do, or for
> committers who want to learn another part of the db, it's a great way to
> learn.
>
> Another thing we can do here is make sure teams are writing about the
> testing they are doing and their results. This will help share knowledge
> about techniques and approaches that others can then apply. This knowledge
> can be shared on the mailing list, a blog post, or in JIRA.
>
>  Jordan
>
>
> > Any thoughts?  I'm happy to take the lead on this.
> > --
> > Jon Haddad
> > http://www.rustyrazorblade.com
> > twitter: rustyrazorblade
> >
>


-- 
Jon Haddad
http://www.rustyrazorblade.com
twitter: rustyrazorblade


Re: QA signup

2018-09-06 Thread sankalp kohli
Thanks for starting this Jon.
Instead of saying "I tested streaming", we should define exactly what was
tested: was all the data transferred, what happened when a stream failed,
etc.
Based on talking to a few users, it looks like most testing is done by
performing an operation or running a load and seeing that it "worked" and
that there were no errors in the logs.
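
To make that concrete, a sketch of the kind of checklist this could mean
for streaming (the nodetool commands are standard; the verification steps
are only illustrative and workload-specific):

    # after e.g. a bootstrap or rebuild streaming test
    nodetool netstats          # no failed or stuck streaming sessions
    nodetool status            # node is UN and owns the expected ranges
    # then verify completeness: compare row counts against the source,
    # and confirm a follow-up repair streams (almost) nothing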

Another important thing will be to fix bugs ASAP ahead of testing, as
fixes can lead to more bugs :)

On Thu, Sep 6, 2018 at 7:52 AM Jonathan Haddad  wrote:

> I was thinking along the same lines.  For this to be successful I think
> either weekly or bi-weekly summary reports back to the mailing list by the
> team lead for each subsection on what's been tested and how it's been
> tested will help keep things moving along.
>
> In my opinion the lead for each team should *not* be the contributor that
> wrote the feature, but someone who's very interested in it and can use the
> contributor as a resource.  I think it would be difficult for the
> contributor to poke holes in their own work - if they could do that it
> would have been done already.  This should be a verification process that's
> as independent as possible from the original work.
>
> In addition to the QA process, it would be great if we could get a docs
> team together.  We've still got quite a few undocumented features and
> nuances; I think hammering that out would be a good idea.  Mick brought up
> updating the website docs in the thread on testing different JDKs [1], if
> we could figure that out in the process we'd be in a really great position
> from the user perspective.
>
> Jon
>
> [1]
>
> https://lists.apache.org/thread.html/5645178efb57939b96e73ab9c298e80ad8e76f11a563b4d250c1ae38@%3Cdev.cassandra.apache.org%3E
>
> On Thu, Sep 6, 2018 at 10:35 AM Jordan West  wrote:
>
> > Thanks for starting this thread, Jon!
> >
> > On Thu, Sep 6, 2018 at 5:51 AM Jonathan Haddad 
> wrote:
> >
> > > For 4.0, I'm thinking it would be a good idea to put together a list of
> > the
> > > things that need testing and see if people are willing to help test /
> > break
> > > those things.  My goal here is to get as much coverage as possible, and
> > let
> > > folks focus on really hammering on specific things rather than just
> > firing
> > > up a cluster and rubber stamping it.  If we're going to be able to
> > > confidently deploy 4.0 quickly after its release, we're going to need a
> > > high attention to detail.
> > >
> > >
> > +1 to a more coordinated effort. I think we could use the Confluence that
> > was set up a little bit ago since it was set up for this purpose, at least
> > for finalized plans and results:
> > https://cwiki.apache.org/confluence/display/CASSANDRA.
> >
> >
> > > In addition to a signup sheet, I think providing some guidance on how
> to
> > QA
> > > each thing that's being tested would go a long way.  Throwing "hey
> please
> > > test sstable streaming" over the wall will only get quality feedback
> from
> > > folks that are already heavily involved in the development process.  It
> > > would be nice to bring some new faces into the project by providing a
> > > little guidance.
> > >
> >
> > > We could help facilitate this even further by considering the people
> > > signing up to test a particular feature as a team, with seasoned
> > Cassandra
> > > veterans acting as team leads.
> > >
> >
> > +1 to this as well. I am always a fan of folks learning about a
> > subsystem/project through testing. It can be challenging to get folks new
> > to a project excited about testing first but for those that do, or for
> > committers who want to learn another part of the db, it's a great way to
> > learn.
> >
> > Another thing we can do here is make sure teams are writing about the
> > testing they are doing and their results. This will help share knowledge
> > about techniques and approaches that others can then apply. This
> knowledge
> > can be shared on the mailing list, a blog post, or in JIRA.
> >
> >  Jordan
> >
> >
> > > Any thoughts?  I'm happy to take the lead on this.
> > > --
> > > Jon Haddad
> > > http://www.rustyrazorblade.com
> > > twitter: rustyrazorblade
> > >
> >
>
>
> --
> Jon Haddad
> http://www.rustyrazorblade.com
> twitter: rustyrazorblade
>


Re: QA signup

2018-09-06 Thread Jonathan Haddad
I completely agree with you, Sankalp.  I didn't want to dig too deep into
the underlying testing methodology (and I still think we shouldn't just
yet) but if the goal is to have confidence in the release, our QA process
needs to be comprehensive.

I believe that having focused teams for each component with a team leader
with support from committers & contributors gives us the best shot at
defining large scale functional tests that can be used to form both
progress and bug reports.  (A person could / hopefully will be on more than
one team).  Coming up with those comprehensive tests will be the job of
the teams, getting frequent bidirectional feedback on the dev ML.  Bugs go
in JIRA as per usual.

Hopefully we can continue this process after the release, giving the
project more structure, and folding more people in over time as
contributors and ideally committers / PMC.

Jon


On Thu, Sep 6, 2018 at 1:15 PM sankalp kohli  wrote:

> Thanks for starting this Jon.
> Instead of saying "I tested streaming", we should define exactly what was
> tested: was all the data transferred, what happened when a stream failed,
> etc.
> Based on talking to a few users, it looks like most testing is done by
> performing an operation or running a load and seeing that it "worked" and
> that there were no errors in the logs.
>
> Another important thing will be to fix bugs ASAP ahead of testing, as
> fixes can lead to more bugs :)
>
> On Thu, Sep 6, 2018 at 7:52 AM Jonathan Haddad  wrote:
>
> > I was thinking along the same lines.  For this to be successful I think
> > either weekly or bi-weekly summary reports back to the mailing list by
> the
> > team lead for each subsection on what's been tested and how it's been
> > tested will help keep things moving along.
> >
> > In my opinion the lead for each team should *not* be the contributor that
> > wrote the feature, but someone who's very interested in it and can use
> the
> > contributor as a resource.  I think it would be difficult for the
> > contributor to poke holes in their own work - if they could do that it
> > would have been done already.  This should be a verification process
> that's
> > as independent as possible from the original work.
> >
> > In addition to the QA process, it would be great if we could get a docs
> > team together.  We've still got quite a few undocumented features and
> > nuances; I think hammering that out would be a good idea.  Mick brought up
> > updating the website docs in the thread on testing different JDKs [1],
> if
> > we could figure that out in the process we'd be in a really great
> position
> > from the user perspective.
> >
> > Jon
> >
> > [1]
> >
> >
> https://lists.apache.org/thread.html/5645178efb57939b96e73ab9c298e80ad8e76f11a563b4d250c1ae38@%3Cdev.cassandra.apache.org%3E
> >
> > On Thu, Sep 6, 2018 at 10:35 AM Jordan West  wrote:
> >
> > > Thanks for starting this thread, Jon!
> > >
> > > On Thu, Sep 6, 2018 at 5:51 AM Jonathan Haddad 
> > wrote:
> > >
> > > > For 4.0, I'm thinking it would be a good idea to put together a list
> of
> > > the
> > > > things that need testing and see if people are willing to help test /
> > > break
> > > > those things.  My goal here is to get as much coverage as possible,
> and
> > > let
> > > > folks focus on really hammering on specific things rather than just
> > > firing
> > > > up a cluster and rubber stamping it.  If we're going to be able to
> > > > confidently deploy 4.0 quickly after its release, we're going to
> need a
> > > > high attention to detail.
> > > >
> > > >
> > > +1 to a more coordinated effort. I think we could use the Confluence
> that
> > > was set up a little bit ago since it was set up for this purpose, at
> least
> > > for finalized plans and results:
> > > https://cwiki.apache.org/confluence/display/CASSANDRA.
> > >
> > >
> > > > In addition to a signup sheet, I think providing some guidance on how
> > to
> > > QA
> > > > each thing that's being tested would go a long way.  Throwing "hey
> > please
> > > > test sstable streaming" over the wall will only get quality feedback
> > from
> > > > folks that are already heavily involved in the development process.
> It
> > > > would be nice to bring some new faces into the project by providing a
> > > > little guidance.
> > > >
> > >
> > > > We could help facilitate this even further by considering the people
> > > > signing up to test a particular feature as a team, with seasoned
> > > Cassandra
> > > > veterans acting as team leads.
> > > >
> > >
> > > +1 to this as well. I am always a fan of folks learning about a
> > > subsystem/project through testing. It can be challenging to get folks
> new
> > > to a project excited about testing first but for those that do, or for
> > > committers who want to learn another part of the db, it's a great way to
> > > learn.
> > >
> > > Another thing we can do here is make sure teams are writing about the
> > > testing they are doing and their results. This will help share
> knowledge
> > > about techniques and approaches that others can then apply. This
> > > knowledge can be shared on the mailing list, a blog post, or in JIRA.

Re: Supporting multiple JDKs

2018-09-06 Thread Sumanth Pasupuleti
> And I would suggest going further and crashing the build with JDK 1.7 so
we can take away the possibility for users to shoot their foot off this way.

I like this suggestion. Either we should come down on the side of NO
support for JDK 1.7, or, if we say we support JDK 1.7, I believe we should
be building against JDK 1.7 to make sure we are compliant.
I have a quick clarifying question here - I believe the origin of
CASSANDRA-14563 is the introduction of an API in 2.2 that is incompatible
with 1.7, which was then manually detected and fixed. Are you suggesting
that, going forward, we would not support 1.7?
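
If we do decide to crash the build on 1.7, a minimal sketch of what such a
guard could look like in build.xml (these are standard Ant elements, but
the placement and message are only illustrative):

    <!-- abort outright when building under JDK 1.7 -->
    <fail message="Building Cassandra requires JDK 8+, found ${ant.java.version}">
        <condition>
            <equals arg1="${ant.java.version}" arg2="1.7"/>
        </condition>
    </fail>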

> Currently I'm unclear on how we would make a stable release using only
JDK8; maybe there are plans on the table I don't know about?

From the current state of build.xml and from past discussions, I also
believe that we need both JDKs to make a 4.0 release using
‘_build_multi_java’. The bonus would be that the release could also run
against Java 11, though that would be an experimental release.
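
For reference, a sketch of that dual-JDK invocation, assuming the
JAVA8_HOME convention from the CASSANDRA-9608 build changes (the paths are
illustrative and will differ per system):

    # JDK 11 as the primary compiler, JDK 8 for the Java-8-only build path
    export JAVA_HOME=/usr/lib/jvm/java-11-openjdk
    export JAVA8_HOME=/usr/lib/jvm/java-8-openjdk
    ant artifacts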

> I'm not familiar with optional jobs or workflows in CircleCi, do you have
an example of what you mean at hand?

By optional, I was referring to having workflow definitions in place, but
calls to those workflows commented out. Basically similar to what we have
today:
workflows:
  version: 2
  build_and_run_tests: *default_jobs
  # build_and_run_tests: *with_dtest_jobs_only
  # build_and_run_tests_java11: *with_dtest_jobs_java11
Jason created CASSANDRA-14609 for this purpose I believe.

> Off-topic, but what are your thoughts on this? Can we add `ant
artifacts`, and the building of the docs, as a separate jobs into the
existing default CircleCI workflow? I think we should also be looking into
getting https://cassandra.apache.org/doc/latest/ automatically updated
after each successful trunk build, and have
https://cassandra.apache.org/doc/X.Y versions on the docs in place (which
are only updated after each patch release).

I like all these ideas! I believe we should be able to add a workflow to
test out artifact generation. Will create a JIRA for this. Your suggestions
around auto-update of the docs provide a way to keep our website docs
up-to-date. Not sure what it takes to do it though. Will be happy to
explore (as part of separate JIRAs).
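
As a sketch of what such a workflow job could look like (CircleCI 2.0
syntax; the job name and docker image are hypothetical):

    build_artifacts:
      docker:
        - image: openjdk:8-jdk
      steps:
        - checkout
        - run: ant artifacts

That would also let non-Java patches (docs, packaging, build changes) get
exercised on every commit.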

Thanks,
Sumanth

On Wed, Sep 5, 2018 at 9:30 PM Mick Semb Wever  wrote:

>
>
> > How would we be sure users will never encounter bugs unless we build
> > against that JDK?
>
>
> Apache Cassandra does not distribute JDK1.7 built releases.
>
> The only way a user could repeat such a bug is if they have built C*
> themselves.
>
> I don't think the project should be responsible for every possible build
> combination Tom, Dick, and Harry can do.
> That's my 2cents anyway.
>
> And I would suggest going further and crashing the build with JDK 1.7 so
> we can take away the possibility for users to shoot their foot off this way.
>
>
> > > The time it takes for tests to run is a headache, so to have to run
> > dtests four times over makes me grimace.
> > It takes only about 25min with default 4x parallelism to run unit tests
> in
> > CircleCI.
>
> I was referring to dtests; how would you do this on CircleCI?
> Today dtests take 5-9 hours on builds.apache.org, not including re-runs
> for offheap, large, novnode.
>
>
> > We definitely can build against JDK 8 alone, however from the thread you
> > linked and from 9608, we wanted to do a stable release that uses JDK8,
> and
> > an experimental release, which uses JDK8 to build most files, and JDK11
> to
> > build the Java 11 specific AtomicBTreePartitionBase file.
>
> Currently I'm unclear on how we would make a stable release using only
> JDK8; maybe there are plans on the table I don't know about?
>
> The current build.xml requires both JDKs to run `ant artifacts`.
> That is, any release will have had all but one class compiled with
> `_build_multi_java` instead of `_build_java8_only`.
>
>
> > My proposal is not to necessarily run UTs and DTests against JDK11 always
> > with every commit but to have workflows in place that can be used
> whenever
> > we deem necessary.
>
>
> I'm not familiar with optional jobs or workflows in CircleCi, do you have
> an example of what you mean at hand?
> I like the idea of having a collection of CircleCi workflows, even if I'd
> rather see less JDKs supported at compile-time.
>
>
> > I think building the artefacts should be part of the CI build step
> because patches are not always about java code.
>
> Off-topic, but what are your thoughts on this? Can we add `ant artifacts`,
> and the building of the docs, as a separate jobs into the existing default
> CircleCI workflow? I think we should also be looking into getting
> https://cassandra.apache.org/doc/latest/ automatically updated after each
> successful trunk build, and have https://cassandra.apache.org/doc/X.Y
> versions on the docs in place (which are only updated after each patch
> release).
>
> regards,
> Mick
>

Re: QA signup

2018-09-06 Thread J. D. Jordan
I would suggest that JIRAs tagged as 4.0 blockers be created for the list once 
it is fleshed out.  Test plans and results could be posted to said JIRAs, to be 
closed once a given test passes. Any bugs found can also then be related back 
to such a ticket for tracking them as well.

-Jeremiah

> On Sep 6, 2018, at 12:27 PM, Jonathan Haddad  wrote:
> 
> I completely agree with you, Sankalp.  I didn't want to dig too deep into
> the underlying testing methodology (and I still think we shouldn't just
> yet) but if the goal is to have confidence in the release, our QA process
> needs to be comprehensive.
> 
> I believe that having focused teams for each component with a team leader
> with support from committers & contributors gives us the best shot at
> defining large scale functional tests that can be used to form both
> progress and bug reports.  (A person could / hopefully will be on more than
> one team).  Coming up with those comprehensive tests will be the job of
> the teams, getting frequent bidirectional feedback on the dev ML.  Bugs go
> in JIRA as per usual.
> 
> Hopefully we can continue this process after the release, giving the
> project more structure, and folding more people in over time as
> contributors and ideally committers / PMC.
> 
> Jon
> 
> 
>> On Thu, Sep 6, 2018 at 1:15 PM sankalp kohli  wrote:
>> 
>> Thanks for starting this Jon.
>> Instead of saying "I tested streaming", we should define exactly what was
>> tested: was all the data transferred, what happened when a stream failed,
>> etc.
>> Based on talking to a few users, it looks like most testing is done by
>> performing an operation or running a load and seeing that it "worked" and
>> that there were no errors in the logs.
>> 
>> Another important thing will be to fix bugs ASAP ahead of testing, as
>> fixes can lead to more bugs :)
>> 
 On Thu, Sep 6, 2018 at 7:52 AM Jonathan Haddad  wrote:
>>> 
>>> I was thinking along the same lines.  For this to be successful I think
>>> either weekly or bi-weekly summary reports back to the mailing list by
>> the
>>> team lead for each subsection on what's been tested and how it's been
>>> tested will help keep things moving along.
>>> 
>>> In my opinion the lead for each team should *not* be the contributor that
>>> wrote the feature, but someone who's very interested in it and can use
>> the
>>> contributor as a resource.  I think it would be difficult for the
>>> contributor to poke holes in their own work - if they could do that it
>>> would have been done already.  This should be a verification process
>> that's
>>> as independent as possible from the original work.
>>> 
>>> In addition to the QA process, it would be great if we could get a docs
>>> team together.  We've still got quite a few undocumented features and
>>> nuances; I think hammering that out would be a good idea.  Mick brought up
>>> updating the website docs in the thread on testing different JDKs [1],
>> if
>>> we could figure that out in the process we'd be in a really great
>> position
>>> from the user perspective.
>>> 
>>> Jon
>>> 
>>> [1]
>> https://lists.apache.org/thread.html/5645178efb57939b96e73ab9c298e80ad8e76f11a563b4d250c1ae38@%3Cdev.cassandra.apache.org%3E
>>> 
> On Thu, Sep 6, 2018 at 10:35 AM Jordan West  wrote:
 
 Thanks for starting this thread, Jon!
 
> On Thu, Sep 6, 2018 at 5:51 AM Jonathan Haddad 
 wrote:
 
> For 4.0, I'm thinking it would be a good idea to put together a list
>> of
 the
> things that need testing and see if people are willing to help test /
 break
> those things.  My goal here is to get as much coverage as possible,
>> and
 let
> folks focus on really hammering on specific things rather than just
 firing
> up a cluster and rubber stamping it.  If we're going to be able to
> confidently deploy 4.0 quickly after its release, we're going to
>> need a
> high attention to detail.
 +1 to a more coordinated effort. I think we could use the Confluence
>> that
 was set up a little bit ago since it was set up for this purpose, at
>> least
 for finalized plans and results:
 https://cwiki.apache.org/confluence/display/CASSANDRA.
 
 
> In addition to a signup sheet, I think providing some guidance on how
>>> to
 QA
> each thing that's being tested would go a long way.  Throwing "hey
>>> please
> test sstable streaming" over the wall will only get quality feedback
>>> from
> folks that are already heavily involved in the development process.
>> It
> would be nice to bring some new faces into the project by providing a
> little guidance.
 
> We could help facilitate this even further by considering the people
> signing up to test a particular feature as a team, with seasoned
 Cassandra
> veterans acting as team leads.
 
 +1 to this as well. I am always a fan of folks learning about a
 subsystem/project through testing. It can be challenging to get folks
>> new
 to a project excited about testing first but for those that do, or for
 committers who want to learn another part of the db, it's a great way to
 learn.

Re: QA signup

2018-09-06 Thread Dinesh Joshi
> On Sep 6, 2018, at 10:27 AM, Jonathan Haddad  wrote:
> 
> I completely agree with you, Sankalp.  I didn't want to dig too deep into
> the underlying testing methodology (and I still think we shouldn't just
> yet) but if the goal is to have confidence in the release, our QA process
> needs to be comprehensive.

I think it is critical to specify the parameters and methodology. We can start 
with the major areas that we'd like to test and later dig deeper into the 
specifics.

> I believe that having focused teams for each component with a team leader
> with support from committers & contributors gives us the best shot at
> defining large scale functional tests that can be used to form both
> progress and bug reports.  (A person could / hopefully will be on more than
> one team).  Coming up with those comprehensive tests will be the job of
> the teams, getting frequent bidirectional feedback on the dev ML.  Bugs go
> in JIRA as per usual.
> 
> Hopefully we can continue this process after the release, giving the
> project more structure, and folding more people in over time as
> contributors and ideally committers / PMC.

+1

Dinesh