Yep, agreed this is definitely the best route forwards.
On 02/07/2020, 01:10, "Joshua McKenzie" wrote:
> Plays pretty cleanly into the "have a test plan" we modded in last month. +1
Plays pretty cleanly into the "have a test plan" we modded in last month. +1
On Wed, Jul 1, 2020 at 6:43 PM Nate McCall wrote:
> If so, I propose we set this thread down for now in deference to us
> articulating the quality bar we set and how we achieve it for features in
> the DB and then retroactively apply them to existing experimental features.
> Should we determine nobody is stepping up to maintain an
> experimental feature...
+1
On Wed, Jul 1, 2020 at 1:55 PM Jon Haddad wrote:
> I think coming up with a formal comprehensive guide for determining if we
> can merge these sorts of hugely impactful features is a great idea.
>
> I'm also on board with applying the same standard to the experimental
> features.
I think coming up with a formal comprehensive guide for determining if we
can merge these sorts of hugely impactful features is a great idea.
I'm also on board with applying the same standard to the experimental
features.
On Wed, Jul 1, 2020 at 1:45 PM Joshua McKenzie wrote:
> Which questions and how we frame it aside, it's clear we have some
> foundational thinking to do...
Which questions and how we frame it aside, it's clear we have some
foundational thinking to do, articulate, and agree upon as a project before
we can reasonably make decisions about deprecation, promotion, or inclusion
of features in the project.
Is that fair?
If so, I propose we set this thread down for now in deference to us
articulating the quality bar we set and how we achieve it for features in
the DB and then retroactively apply them to existing experimental features.
Should we determine nobody is stepping up to maintain an experimental
feature...
I humbly suggest these are the wrong questions to ask. Instead, two sides of
just one question matter: how did we miss these problems, and what would we
have needed to do procedurally not to have missed them? Whatever it is, we
need to do it now to have confidence other things were not missed...
On Wed, 1 Jul 2020 at 15:42, Benjamin Lerer wrote:
> I agree with Jeff that there is some stuff to do to address the current MV
> issues and I am willing to focus on making them production ready.
+1
>
> "Make the scan faster"
> "Make the scan incremental and automatic"
> "Make it not blow up your page cache"
> "Make losing your base replicas less likely".
>
> There's a concrete, real opportunity with MVs to create integrity
> assertions we're missing. A dangling record from an MV that would...
It would be incredibly helpful for us to have some empirical data and
agreed-upon terms and benchmarks to help us navigate discussions like this:
* How widely used is a feature in C* deployments worldwide?
* What are the primary issues users face when deploying them? Scaling them?
During failures? ...
I see this discussion as several decisions which can be made in small
increments.
1. In release cycles, when can we propose a feature to be deprecated or
marked experimental? Ideally a new feature should come out experimental if
required, but we have several that are candidates now. We can work on...
> On Jun 30, 2020, at 4:52 PM, joshua.mcken...@gmail.com wrote:
>
> I followed up with the clarification about unit and dtests for that reason,
> Dinesh. We test experimental features now.
I hit send before seeing your clarification. I personally feel that unit and
dtests may not surface regressions...
>>> Instead of ripping it out, we could instead disable them in the yaml
>>> with big fat warning comments around it.
FYI we have already disabled use of materialized views, SASI, and transient
replication by default in 4.0:
https://github.com/apache/cassandra/blob/trunk/conf/cassandra.yaml#L13
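For reference, the relevant defaults in that cassandra.yaml look roughly like
the sketch below. Flag names are as they appear in the 4.0-era file linked
above (later versions renamed them), so check your own version before relying
on them:

    # Experimental features ship disabled by default in 4.0; operators who
    # were already using them must opt back in explicitly.
    enable_materialized_views: false
    enable_sasi_indexes: false
    enable_transient_replication: false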
I followed up with the clarification about unit and dtests for that reason,
Dinesh. We test experimental features now.
If we’re talking about adding experimental features to the 4.0 quality testing
effort, how does that differ from just saying “we won’t release until we’ve
tested and stabilized them”?
> On Jun 30, 2020, at 4:05 PM, Brandon Williams wrote:
>
> Instead of ripping it out, we could instead disable them in the yaml
> with big fat warning comments around it. That way people already
> using them can just enable them again, but it will raise the bar for
> new users who ignore/miss the warnings...
On Tue, Jun 30, 2020 at 5:41 PM wrote:
> Given we’re at a place where things like MV’s and sasi are backing production
> cases (power users one would hope or smaller use cases) I don’t think ripping
> those features out and further excluding users from the ecosystem is the
> right move.
Instead of ripping it out, we could instead disable them in the yaml
with big fat warning comments around it...
> On Jun 30, 2020, at 3:40 PM, joshua.mcken...@gmail.com wrote:
>
> I don’t think we should hold up releases on testing experimental features.
> Especially with how many of them we have.
>
> Given we’re at a place where things like MV’s and sasi are backing production
> cases (power users one would hope or smaller use cases)...
Just to clarify one thing. I understand experimental features to be alpha /
beta quality, and as such their correctness guarantees differ from those of
the other features in the database. We should likely articulate this in the
wiki and docs if we have not.
In the case of MVs, since they...
I don’t think we should hold up releases on testing experimental features.
Especially with how many of them we have.
Agree re: needing a more quantitative bar for new additions, which we can also
retroactively apply to experimental features to bring them up to speed and
eventually graduate. Probably...
If that is the case then shouldn't we add MV to "4.0 Quality: Components
and Test Plans" (CASSANDRA-15536)? It is currently missing, so adding it
to the testing road map would be a clear sign that someone is planning to
champion and own this feature; if people feel that this is a broken
feature...
I think the point is that we need to have a clear plan of action to bring
features up to an acceptable standard. That also implies a need to agree on
how we determine if a feature has reached an acceptable standard - both going
forwards and retrospectively. For those that don't reach that standard...
Let's forget I said anything about release cadence. That's another thread
entirely and a good deep conversation to explore. Don't want to derail.
If there's a question about "is anyone stepping forward to maintain MV's",
I can say with certainty that at least one full-time contributor I work
with...
I don't think we can realistically expect majors, with the deprecation cycle
they entail, to come every six months. If nothing else, we would have too many
versions to maintain at once. I personally think all the project needs on that
front is clearer roadmapping at the start of a release cycle...
On Tue, Jun 30, 2020 at 1:46 PM Joshua McKenzie wrote:
> We're just short of 98 tickets on the component since its original merge
> so at least *some* work has been done to stabilize them. Not to say I'm
> endorsing running them at massive scale today without knowing what you're
> doing, to be clear...
I think, just as importantly, we also need to grapple with what went wrong when
features landed this way, since these were not isolated occurrences -
suggesting structural issues were at play.
I'm not sure if a retrospective is viable with this organisational structure,
but we can perhaps engage...
Seems like a reasonable point of view to me, Sankalp. I'd also suggest we
try to find other sources of data than just the user ML, like searching on
GitHub for instance. A collection of imperfect metrics beats just one, in my
experience.
Though I would ask why we're having this discussion this late...
> So from my PoV, I'm against us just voting to deprecate and remove without
> going into more depth into the current state of things and what options are
> on the table, since people will continue to build MV's at the client level
> which, in theory, should have worse correctness and performance...
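To make the contrast above concrete, here is a minimal CQL sketch of the two
approaches; the users/users_by_email schema is invented for illustration:

    -- Server-side MV: Cassandra maintains the view on every base-table write.
    CREATE MATERIALIZED VIEW users_by_email AS
        SELECT * FROM users
        WHERE email IS NOT NULL AND user_id IS NOT NULL
        PRIMARY KEY (email, user_id);

    -- Client-level equivalent: the application dual-writes a second table,
    -- typically in a logged batch, and owns keeping the copies consistent
    -- (no server-side repair of missed or partial writes).
    BEGIN BATCH
        INSERT INTO users (user_id, email, name) VALUES (?, ?, ?);
        INSERT INTO users_by_email (email, user_id, name) VALUES (?, ?, ?);
    APPLY BATCH;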
Hi,
I think we should revisit all features that require a lot more work to
make them work. Here is what I think we should do for each one of them:
1. Identify such features and some details of why they are deprecation
candidates.
2. Ask the dev list if anyone is willing to work on improving the...
> While at TLP, I helped numerous customers move off of MVs, mostly because
> they affected stability of clusters in a horrific way. The most telling
> project involved helping someone create new tables to manage 1GB of data
> because the views performed so poorly they made the cluster unresponsive...
We're just short of 98 tickets on the component since its original merge
so at least *some* work has been done to stabilize them. Not to say I'm
endorsing running them at massive scale today without knowing what you're
doing, to be clear. They are perhaps our largest loaded gun of a feature...
+1
On Tue, Jun 30, 2020 at 2:44 PM Jon Haddad wrote:
>
> A couple days ago when writing a separate email I came across this DataStax
> blog post discussing MVs [1]. Imagine my surprise when I noticed the date
> was five years ago...
>
> While at TLP, I helped numerous customers move off of MVs...
> On Jun 30, 2020, at 12:43 PM, Jon Haddad wrote:
>
> As we move forward with the 4.0 release, we should consider this an
> opportunity to deprecate materialized views, and remove them in 5.0. We
> should take this opportunity to learn from the mistake and raise the bar
> for new features...
+1 for deprecation and removal (assuming a credible plan to fix them doesn't
materialize)
> On Jun 30, 2020, at 12:43 PM, Jon Haddad wrote:
>
> A couple days ago when writing a separate email I came across this DataStax
> blog post discussing MVs [1]. Imagine my surprise when I noticed the date
> was five years ago...
A couple days ago when writing a separate email I came across this DataStax
blog post discussing MVs [1]. Imagine my surprise when I noticed the date
was five years ago...
While at TLP, I helped numerous customers move off of MVs, mostly because
they affected stability of clusters in a horrific way. The most telling
project involved helping someone create new tables to manage 1GB of data
because the views performed so poorly they made the cluster unresponsive...