Re: [DISCUSS] Future of MVs
> > "Make the scan faster" > "Make the scan incremental and automatic" > "Make it not blow up your page cache" > "Make losing your base replicas less likely". > > There's a concrete, real opportunity with MVs to create integrity > assertions we're missing. A dangling record from an MV that would point to > missing base data is something that could raise alarm bells and signal > JIRAs so we can potentially find and fix more surprise edge cases. > I agree with Jeff that there is some stuff to do to address the current MV issues and I am willing to focus on making them production ready. On Wed, Jul 1, 2020 at 2:58 AM wrote: > It would be incredibly helpful for us to have some empirical data and > agreed upon terms and benchmarks to help us navigate discussions like this: > > * How widely used is a feature in C* deployments worldwide? > * What are the primary issues users face when deploying them? Scaling > them? During failure scenarios? > * What does the engineering effort to bridge these gaps look like? Who > will do that? On what time horizon? > * What does our current test coverage for this feature look like? > * What shape of defects are arising with the feature? In a specific > subsection of the module or usage? > * Do we have an agreed upon set of standards for labeling a feature > stable? As experimental? If not, how do we get there? > * What effort will it take to bridge from where we are to where we agree > we need to be? On what timeline is this acceptable? > > I believe these are not only answerable questions, but fundamentally the > underlying themes our discussion alludes to. They’re also questions that > apply to a lot more than just MV’s and tie into what you’re speaking to > above Benedict. > > > > On Jun 30, 2020, at 8:32 PM, sankalp kohli > wrote: > > > > I see this discussion as several decisions which can be made in small > > increments. > > > > 1. In release cycles, when can we propose a feature to be deprecated or > > marked experimental. Ideally a new feature should come out experimental > if > > required but we have several who are candidates now. We can work on > > integrating this in the release lifecycle doc we already have. > > 2. What is the process of making an existing feature experimental? How > does > > it affect major releases around testing. > > 3. What is the process of deprecating/removing an experimental feature. > > (Assuming experimental features should be deprecated/removed) > > > > Coming to MV, I think we need more data before we can say we > > should deprecate MV. Here are some of them which should be part of > > deprecation process > > 1.Talk to customers who use them and understand what is the impact. Give > > them a forum to talk about it. > > 2. Do we have enough resources to bring this feature out of the > > experimental feature list in next 1 or 2 major releases. We cannot have > too > > many experimental features in the database. Marking a feature > experimental > > should not be a parking place for a non functioning feature but a place > > while we stabilize it. > > > > > > > > > >> On Tue, Jun 30, 2020 at 4:52 PM wrote: > >> > >> I followed up with the clarification about unit and dtests for that > reason > >> Dinesh. We test experimental features now. > >> > >> If we’re talking about adding experimental features to the 40 quality > >> testing effort, how does that differ from just saying “we won’t release > >> until we’ve tested and stabilized these features and they’re no longer > >> experimental”? 
> >> > >> Maybe I’m just misunderstanding something here? > >> > On Jun 30, 2020, at 7:12 PM, Dinesh Joshi wrote: > >>> > >>> > > On Jun 30, 2020, at 4:05 PM, Brandon Williams > wrote: > > Instead of ripping it out, we could instead disable them in the yaml > with big fat warning comments around it. That way people already > using them can just enable them again, but it will raise the bar for > new users who ignore/miss the warnings in the logs and just use them. > >>> > >>> Not a bad idea. Although, the real issue is that users enable MV on a 3 > >> node cluster with a few megs of data and conclude that MVs will > >> horizontally scale with the size of data. This is what causes issues for > >> users who naively roll it out in production and discover that MVs do not > >> scale with their data growth. So whatever we do, the big fat warning > should > >> educate the unsuspecting operator. > >>> > >>> Dinesh > >>> - > >>> To unsubscribe, e-mail: dev-unsubscr...@cassandra.apache.org > >>> For additional commands, e-mail: dev-h...@cassandra.apache.org > >>> > >> > >> - > >> To unsubscribe, e-mail: dev-unsubscr...@cassandra.apache.org > >> For additional commands, e-mail: dev-h...@cassandra.apache.org > >> > >> > > -
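For anyone following along who hasn't used MVs: the integrity assertion Jeff
describes boils down to checking that every view row still maps back to a
live, matching base row. A minimal sketch in CQL, with a hypothetical users
table (the schema and values below are illustrative only, not something
proposed in this thread):

    -- Hypothetical base table, and a view that re-keys the same data by email.
    CREATE TABLE users (
        id    uuid PRIMARY KEY,
        email text
    );

    CREATE MATERIALIZED VIEW users_by_email AS
        SELECT email, id
        FROM users
        WHERE email IS NOT NULL AND id IS NOT NULL
        PRIMARY KEY (email, id);

    -- A "dangling" view record is one where these two reads disagree:
    -- the view still returns an (email, id) pair ...
    SELECT email, id FROM users_by_email WHERE email = 'a@example.com';

    -- ... but the base row is gone, or its email no longer matches.
    SELECT email FROM users WHERE id = 123e4567-e89b-12d3-a456-426614174000;

An automated check built around that comparison is the kind of thing that
could raise the alarm bells and JIRAs mentioned above. (Brandon's yaml
suggestion presumably maps to the enable_materialized_views flag in
cassandra.yaml, assuming that is the knob he has in mind.)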
Re: [DISCUSS] Future of MVs
On Wed, 1 Jul 2020 at 15:42, Benjamin Lerer wrote:

> I agree with Jeff that there is some stuff to do to address the current MV
> issues and I am willing to focus on making them production ready.

+1
Re: [DISCUSS] Future of MVs
I humbly suggest these are the wrong questions to ask. Instead, two sides of
just one question matter: how did we miss these problems, and what would we
have needed to do procedurally to have not missed them? Whatever it is, we
need to do it now to have confidence other things were not missed, as well
as for all future features.

We should start by producing a list of what we think is necessary for
deploying successful features. We can then determine what items are missing
that would have been needed to catch a problem. Obvious things are:

* integration tests at scale
* integration tests with a variety of extreme workloads
* integration tests with various cluster topologies
* data integrity tests as part of the above
* all of the above as reproducible tests incorporated into the source tree

We can then ensure Jira accurately represents all of the known issues with
MVs (and other features). This includes those that are poorly defined (such
as "doesn't scale").

Then we can look at all issues and ask: would this approach have caught it,
and if not, what do we need to add to the guidelines to prevent a recurrence
- and also ensure this problem is unique? In future we can ask, for bugs
found in features built to these guidelines: why didn't the guidelines catch
this bug? Do they need additional items, or greater specificity about how to
meet given criteria?

I do not think that data from deployments - even if reliably obtained - can
tell us much besides which problems we prioritise.

On 01/07/2020, 01:58, "joshua.mcken...@gmail.com" wrote:

    It would be incredibly helpful for us to have some empirical data and
    agreed upon terms and benchmarks to help us navigate discussions like
    this:
Re: [DISCUSS] Future of MVs
Which questions we ask and how we frame them aside, it's clear we have some
foundational thinking to do, articulate, and agree upon as a project before
we can reasonably make decisions about deprecation, promotion, or inclusion
of features in the project.

Is that fair?

If so, I propose we set this thread down for now in deference to us
articulating the quality bar we set and how we achieve it for features in
the DB, and then retroactively apply that bar to existing experimental
features. Should we determine nobody is stepping up to maintain an
experimental feature in a reasonable time frame, we can cross the bridge of
the implications of scale of adoption and the perceived impact on the user
community of deprecation and removal at that time.

On Wed, Jul 1, 2020 at 9:59 AM Benedict Elliott Smith wrote:

> I humbly suggest these are the wrong questions to ask. Instead, two sides
> of just one question matter: how did we miss these problems, and what
> would we have needed to do procedurally to have not missed them?
Re: [DISCUSS] Future of MVs
I think coming up with a formal, comprehensive guide for determining whether
we can merge these sorts of hugely impactful features is a great idea.

I'm also on board with applying the same standard to the experimental
features.

On Wed, Jul 1, 2020 at 1:45 PM Joshua McKenzie wrote:

> Which questions we ask and how we frame them aside, it's clear we have
> some foundational thinking to do, articulate, and agree upon as a project
> before we can reasonably make decisions about deprecation, promotion, or
> inclusion of features in the project.
>
> Is that fair?
Re: [DISCUSS] Future of MVs
+1

On Wed, Jul 1, 2020 at 1:55 PM Jon Haddad wrote:

> I think coming up with a formal, comprehensive guide for determining
> whether we can merge these sorts of hugely impactful features is a great
> idea.
>
> I'm also on board with applying the same standard to the experimental
> features.
Re: [DISCUSS] Future of MVs
> If so, I propose we set this thread down for now in deference to us
> articulating the quality bar we set and how we achieve it for features in
> the DB, and then retroactively apply that bar to existing experimental
> features. Should we determine nobody is stepping up to maintain an
> experimental feature in a reasonable time frame, we can cross the bridge
> of the implications of scale of adoption and the perceived impact on the
> user community of deprecation and removal at that time.

We should make sure we back-haul this into the CEP process so that new
features/large changes have to provide some idea of what the gates are to
be production ready.
Re: [DISCUSS] Future of MVs
Plays pretty cleanly into the "have a test plan" requirement we added last
month. +1

On Wed, Jul 1, 2020 at 6:43 PM Nate McCall wrote:

> We should make sure we back-haul this into the CEP process so that new
> features/large changes have to provide some idea of what the gates are to
> be production ready.
Moving forward towards our best release yet
I've been in the Cassandra community for about 10 years now and I've seen a
lot of ups and downs. I care deeply about both the project and the people
interacting on the project personally. I consider many of you to be good
friends.

Regardless of the history that's caused some friction on recent discussion
threads, I hope we can all see past the "us versus them" towards shipping
something excellent and building momentum.

I would just ask - please assume that everyone wants the project to succeed:
to build the best, most stable, most scalable, most developer- and
operationally-friendly database out there. Please know that while you
personally may have seen X clusters in your work, with Y nodes and Z
challenges, you're in good company - we all have. Let's assume that everyone
has a unique contribution to make based on battle scars and triumphs. As we
listen to each other with this in mind, I think we can move forward more
effectively.

There are so many complementary efforts that can help make things more
stable, reproduce issues, and test for regressions now. As we get into the
final stages of the 4.0 release cycle, I think we can bring all of this to
bear for the best release we've ever had.

We all have different viewpoints, but please let's assume the best in others
and communicate constructively. We all have things to contribute - large or
small - and it's great to see renewed interest from new contributors. With
all of the energy leading up to the release, I think we're seeing a glimpse
of what we can do as a revitalized project and community, and this is just
the beginning.

Thanks for all you do,
Jeremy