Thanks a lot for starting this thread, Dinesh.
As a baseline expectation, we thought big users of Cassandra should be
> running the latest trunk internally and testing it out for their particular
> use cases. We wanted them to file as many jiras as possible based on their
> experience. Operations s
>
> It would be nice to have a graph on our weekly status of the number of
> issues reported on 4.0. I feel like having a visual representation of the
> number of bugs on 4.0 over time would be really helpful to give us a
> feeling of the progress on its stability.
>
>
>
> Berenguer pointed out to me that we already have a graph to track those
> things:
>
>
> https://issues.apache.org/jira/secure/ConfigureReport.jspa?projectOrFilterId=filter-12347782&periodName=weekly&daysprevious=30&cumulative=true&versionLabels=none&selectedProjectId=12310865&reportKey=com.at
That is a good finger-in-the-air starting point, IMO. We'd have to adjust
the backing filter to reflect exactly what we want, but we already have the
data and a graph report at hand, which is good :-)
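For the record, the JQL behind that filter would presumably be something
along these lines (a sketch only; the exact fixVersion labels and any extra
clauses would need checking against the real filter):

    project = CASSANDRA AND issuetype = Bug
        AND fixVersion in ("4.0", "4.0-alpha", "4.0-beta")

The linked report then just charts whatever that filter returns, weekly and
cumulatively over the last 30 days, per the periodName, cumulative and
daysprevious parameters in the URL.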
On 30/6/20 11:09, Benjamin Lerer wrote:
>> It would be nice to have a graph on our weekly
That's a very good point. At the risk of saying something silly or being
captain obvious, as I am not familiar with the project dynamics, there
should be a periodic 'backlog triage' or similar. Otherwise we'll have the
impression we have just a handful of pending issues while another 10x
packet is hiding.
It is a good catch, Mick. :-)
I will triage those tickets to be sure that our view of things is accurate.
On Tue, Jun 30, 2020 at 11:38 AM Berenguer Blasi
wrote:
> That's a very good point. At the risk of saying something silly or being
> captain obvious, as I am not familiar with the project dynam
A couple days ago when writing a separate email I came across this DataStax
blog post discussing MVs [1]. Imagine my surprise when I noticed the date
was five years ago...
While at TLP, I helped numerous customers move off of MVs, mostly because
they affected stability of clusters in a horrific way.
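(For anyone less familiar with the feature under discussion: "moving off of
MVs" generally means replacing the server-maintained view with an ordinary
table that the application keeps in sync itself. A rough CQL sketch, with
hypothetical table and column names:

    -- base table plus a server-maintained materialized view
    CREATE TABLE users (
        user_id uuid PRIMARY KEY,
        email   text,
        name    text
    );

    CREATE MATERIALIZED VIEW users_by_email AS
        SELECT * FROM users
        WHERE email IS NOT NULL AND user_id IS NOT NULL
        PRIMARY KEY (email, user_id);

    -- client-maintained alternative (replaces the view above; you would
    -- create one or the other, not both): a plain table the application
    -- writes alongside the base table, e.g. via a logged batch of
    -- prepared statements
    CREATE TABLE users_by_email (
        email   text,
        user_id uuid,
        name    text,
        PRIMARY KEY (email, user_id)
    );

    BEGIN BATCH
        INSERT INTO users          (user_id, email, name) VALUES (?, ?, ?);
        INSERT INTO users_by_email (email, user_id, name) VALUES (?, ?, ?);
    APPLY BATCH;

With the second approach the burden of keeping the two tables consistent on
updates and deletes falls entirely on the application, which is the
correctness concern raised further down the thread.)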
+1 for deprecation and removal (assuming a credible plan to fix them doesn't
materialize)
> On Jun 30, 2020, at 12:43 PM, Jon Haddad wrote:
>
> A couple days ago when writing a separate email I came across this DataStax
> blog post discussing MVs [1]. Imagine my surprise when I noticed the dat
> On Jun 30, 2020, at 12:43 PM, Jon Haddad wrote:
>
> As we move forward with the 4.0 release, we should consider this an
> opportunity to deprecate materialized views, and remove them in 5.0. We
> should take this opportunity to learn from the mistake and raise the bar
> for new features to und
+1
On Tue, Jun 30, 2020 at 2:44 PM Jon Haddad wrote:
>
> A couple days ago when writing a separate email I came across this DataStax
> blog post discussing MVs [1]. Imagine my surprise when I noticed the date
> was five years ago...
>
> While at TLP, I helped numerous customers move off of MVs,
We're just short of 98 tickets on the component since its original merge
so at least *some* work has been done to stabilize them. Not to say I'm
endorsing running them at massive scale today without knowing what you're
doing, to be clear. They are perhaps our largest loaded gun of a feature of
sel
> While at TLP, I helped numerous customers move off of MVs, mostly because
> they affected stability of clusters in a horrific way. The most telling
> project involved helping someone create new tables to manage 1GB of data
> because the views performed so poorly they made the cluster unresponsive.
Hi,
I think we should revisit all features which require a lot more work to
make them work. Here is what I think we should do for each of them:
1. Identify such features and some details of why they are deprecation
candidates.
2. Ask the dev list if anyone is willing to work on improving the
> So from my PoV, I'm against us just voting to deprecate and remove without
> going into more depth into the current state of things and what options are
> on the table, since people will continue to build MV's at the client level
> which, in theory, should have worse correctness and performance
Seems like a reasonable point of view to me, Sankalp. I'd also suggest we
try to find other sources of data than just the user ML, like searching on
GitHub, for instance. A collection of imperfect metrics beats just one in my
experience.
Though I would ask why we're having this discussion this late
I think, just as importantly, we also need to grapple with what went wrong when
features landed this way, since these were not isolated occurrences -
suggesting structural issues were at play.
I'm not sure if a retrospective is viable with this organisational structure,
but we can perhaps engag
On Tue, Jun 30, 2020 at 1:46 PM Joshua McKenzie
wrote:
> We're just short of 98 tickets on the component since its original merge
> so at least *some* work has been done to stabilize them. Not to say I'm
> endorsing running them at massive scale today without knowing what you're
> doing, to be c
I don't think we can realistically expect majors, with the deprecation cycle
they entail, to come every six months. If nothing else, we would have too many
versions to maintain at once. I personally think all the project needs on that
front is clearer roadmapping at the start of a release cycle.
Let's forget I said anything about release cadence. That's another thread
entirely and a good deep conversation to explore. Don't want to derail.
If there's a question about "is anyone stepping forward to maintain MV's",
I can say with certainty that at least one full time contributor I work
with
I think the point is that we need to have a clear plan of action to bring
features up to an acceptable standard. That also implies a need to agree on
how we determine whether a feature has reached an acceptable standard - both
going forwards and retrospectively. For those that don't reach that standard
If that is the case then shouldn't we add MV to "4.0 Quality: Components
and Test Plans" (CASSANDRA-15536)? It is currently missing, so adding it
to the testing road map would be a clear sign that someone is planning to
champion and own this feature; if people feel that this is a broken
feature, s
On Wed, Jul 1, 2020 at 10:27 AM David Capwell wrote:
> If that is the case then shouldn't we add MV to "4.0 Quality: Components
> and Test Plans" (CASSANDRA-15536)? It is currently missing, so adding it
> to the testing road map would be a clear sign that someone is planning to
> champion and ow
> On Jun 30, 2020, at 3:27 PM, David Capwell wrote:
>
> If that is the case then shouldn't we add MV to "4.0 Quality: Components
> and Test Plans" (CASSANDRA-15536)? It is currently missing, so adding it
> to the testing road map would be a clear sign that someone is planning to
> champion and o
I don’t think we should hold up releases on testing experimental features,
especially with how many of them we have.
Agree re: needing a more quantitative bar for new additions, which we can
also retroactively apply to experimental features to bring them up to speed
and eventually graduate them. Probably
Just to clarify one thing: I understand experimental features to be alpha /
beta quality, and as such their correctness guarantees differ from those of
the other features in the database. We should likely articulate this in the
wiki and docs if we have not.
In the case of MVs, since they
> On Jun 30, 2020, at 3:40 PM, joshua.mcken...@gmail.com wrote:
>
> I don’t think we should hold up releases on testing experimental features.
> Especially with how many of them we have.
>
> Given we’re at a place where things like MV’s and sasi are backing production
> cases (power users one w
On Tue, Jun 30, 2020 at 5:41 PM wrote:
> Given we’re at a place where things like MV’s and sasi are backing production
> cases (power users one would hope or smaller use cases) I don’t think ripping
> those features out and further excluding users from the ecosystem is the
> right move.
Instead of ripping it out, we could instead disable them in the yaml
with big fat warning comments around it.
> On Jun 30, 2020, at 4:05 PM, Brandon Williams wrote:
>
> Instead of ripping it out, we could instead disable them in the yaml
> with big fat warning comments around it. That way people already
> using them can just enable them again, but it will raise the bar for
> new users who ignore/miss th
Thank you to all those who responded.
One potential way we could speed up sussing out issues is running regular "Bug
Bashes" with the help of the user community. We could periodically post stats
and recognize folks who contribute the most issues. This would help gain
confidence in the builds we're
I followed up with the clarification about unit and dtests for that reason,
Dinesh. We test experimental features now.
If we’re talking about adding experimental features to the 4.0 quality testing
effort, how does that differ from just saying “we won’t release until we’ve
tested and stabilized t
>>> Instead of ripping it out, we could instead disable them in the yaml
>>> with big fat warning comments around it.
FYI we have already disabled use of materialized views, SASI, and transient
replication by default in 4.0
https://github.com/apache/cassandra/blob/trunk/conf/cassandra.yaml#L13
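From memory, the relevant defaults look roughly like the lines below (check
the linked yaml for the exact property names and the warning comments around
them):

    enable_materialized_views: false
    enable_sasi_indexes: false
    enable_transient_replication: false

Operators already relying on these features can flip the flags back to true,
which is essentially the behaviour Brandon describes above.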
> On Jun 30, 2020, at 4:52 PM, joshua.mcken...@gmail.com wrote:
>
> I followed up with the clarification about unit and dtests for that reason
> Dinesh. We test experimental features now.
I hit send before seeing your clarification. I personally feel that unit and
dtests may not surface regressions
I see this discussion as several decisions which can be made in small
increments.
1. In release cycles, when can we propose a feature to be deprecated or
marked experimental? Ideally a new feature should come out as experimental
if required, but we have several that are candidates now. We can work on
i
It would be incredibly helpful for us to have some empirical data and
agreed-upon terms and benchmarks to help us navigate discussions like this:
* How widely used is a feature in C* deployments worldwide?
* What are the primary issues users face when deploying them? Scaling them?
During fa