Fwiw, as much as I agree this is a change worth doing in general, I am
-0 for 4.0. Both the "compact sequencing" and the change of default really.
We're closing in on 2 months into the freeze, and for me a freeze does include
not changing defaults, because changing a default ideally implies a decent
amount of
Agree with Sylvain (and I think Benedict) - there’s no compelling reason to
violate the freeze here. We’ve had the wrong default for years - add a note to
the docs that we’ll be changing it in the future, but let’s not violate the
freeze now.
--
Jeff Jirsa
> On Oct 19, 2018, at 10:06 AM, Syl
It reminds me of “shadow writes” described in [1].
During data migration, the coordinator forwards a copy of any write request
for tokens that are being transferred to the new node.
[1] Incremental Elasticity for NoSQL Data Stores, SRDS’17,
https://ieeexplore.ieee.org/document/8069080
>
> Can you share the link to cwiki if you have started it ?
>
I haven't.
But I'll try to put together a strawman proposal for the doc(s) over the
weekend.
Regards,
Mick
The change of default property doesn't seem to violate the freeze? The
predominant phrase used in that thread was 'feature freeze'. A lot of people
are now interpreting it more broadly, so perhaps we need to revisit, but that's
probably a separate discussion?
The current default is really bad
Hi,
I ran some benchmarks on my laptop
https://issues.apache.org/jira/browse/CASSANDRA-13241?focusedCommentId=16656821&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16656821
For a random read workload, varying chunk size:
Chunk size   Time
64k          25:20
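In case it helps anyone reproduce this on their own tables: the chunk size is a
per-table compression parameter, so lowering it looks roughly like the
following (keyspace, table, and compressor class here are placeholders):

    ALTER TABLE my_ks.my_table
      WITH compression = {'class': 'LZ4Compressor', 'chunk_length_in_kb': 16};

The ticket itself is only about what that value defaults to when it isn't set
explicitly.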
>
> The predominant phrased used in that thread was 'feature freeze'.
At the risk of hijacking this thread, when are we going to transition from
"no new features, change whatever else you want including refactoring and
changing years-old defaults" to "ok, we think we have something that's
stable,
new DC and then split is one way, but you have to wait for it to stream,
and then how do you know the DC coherence is good enough to switch the
targeted DC for local_quorum? And then once we split it we'd have downtime
to "change the name" and other work that would distinguish it from the
original
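(For readers following along: the "new DC then split" approach generally starts
by adding the new DC to the keyspace's replication settings and then streaming
data into it; a rough sketch, with placeholder keyspace and DC names:

    ALTER KEYSPACE my_ks
      WITH replication = {'class': 'NetworkTopologyStrategy',
                          'DC1': 3, 'DC2': 3};

after which each node in the new DC is rebuilt from the existing one, e.g. with
nodetool rebuild -- DC1, and clients are only repointed at the new DC once
streaming and consistency checks look good.)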
Shall we move this discussion to a separate thread? I agree it needs to be
had, but this will definitely derail this discussion.
To respond only to the relevant portion for this thread:
> changing years-old defaults
I don’t see how age is relevant? This isn’t some ‘battle hardened’ feature
w
On 10/19/18 9:16 AM, Joshua McKenzie wrote:
>
> At the risk of hijacking this thread, when are we going to transition from
> "no new features, change whatever else you want including refactoring and
> changing years-old defaults" to "ok, we think we have something that's
> stable, time to start te
Also we have 2.1.x and 2.2 clusters, so we can't use CDC since apparently
that is a 3.8 feature.
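(For reference, on clusters new enough to have it, CDC is enabled per table on
top of a cassandra.yaml switch (cdc_enabled: true); with a placeholder table
name that looks roughly like:

    ALTER TABLE my_ks.my_table WITH cdc = true;

which is why it isn't an option for the 2.1/2.2 clusters mentioned above.)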
Virtual tables are very exciting so we could do some collating stuff (which
I'd LOVE to do with our scheduling application where we can split tasks
into near-term/most frequent (hours to days), medium-term
(We should definitely harden the definition for freeze in a separate thread)
My thinking is that this is the best time to do this change, as we have not even
cut an alpha or beta. All the people involved in testing will definitely be
testing it again when we have those releases.
> On Oct 19, 2018
Thanks for the info,
I will try Aerospike as well; they always include a one-node
installation in their benchmarking and also talk about vertical
scalability.
Kind regards.
On Thu, Oct 18, 2018 at 14:44, Aleksey Yeshchenko wrote:
> I agree with Jeff here.
>
> Furthermore, Cassandra
I think we should try to do the right thing for as many people as we can.
The number of folks impacted by 64KB is huge. I've worked on a lot
of clusters created by a lot of different teams, going from brand new to
pretty damn knowledgeable. I can't think of a single time over the last 2
years
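(A quick way to see what a given cluster's tables are actually using: the
compression parameters are visible per table in the schema tables on 3.0 and
later, e.g.

    SELECT keyspace_name, table_name, compression
    FROM system_schema.tables;

which makes it easy to spot tables still sitting on the 64KB chunk length.)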
Sorry, to be clear - I'm +1 on changing the configuration default, but I
think changing the compression in-memory representations warrants further
discussion and investigation before making a case for or against it yet.
An optimization that reduces in-memory cost by over 50% sounds pretty good
and
Do you mean to say that during host replacement there may be a time when the
old->new host mapping isn't fully propagated and therefore wouldn't yet be in
all system tables?
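(One way to see exactly what a node believes about its peers is to query the
gossip-backed system table directly on each node, e.g.

    SELECT peer, host_id, data_center, rack, tokens FROM system.peers;

a host missing from, or stale in, that view is the situation being described
here.)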
> On Oct 17, 2018, at 4:20 PM, sankalp kohli wrote:
>
> This is not the case during host replacement correct?
>
> On Tue, Oct 1
Say you restarted all instances in the cluster and the status for some host goes
missing. Now when you start a host replacement, the new host won't learn about
the host whose status is missing, and its view of this host will be wrong.
PS: I will be happy to be proved wrong as I can also start using G