FWIW, I’m not -0, just think that long after the freeze date a change like this
needs a strong mandate from the community. I think the change is a good one.
> On 17 Oct 2018, at 22:09, Ariel Weisberg wrote:
>
> Hi,
>
> It's really not appreciably slower compared to the decompression we are…
Hello,
I am wondering whether using Cassandra as a local database, without the cluster
capabilities, makes sense (I cannot run a multi-node cluster due to a technical
constraint).
I have an application that needs to store a dynamic number of columns on each
row (something I cannot do with a classical relational database).
---> PostgreSQL allows you to use an array type or a map type with a dynamic
number of records, provided of course that the cardinality of the
collection is not "too" big.
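(For context, the usual Cassandra answer to "a dynamic number of columns per
row" is a clustering column, one clustered row per attribute. A minimal sketch
with the DataStax Java driver; the keyspace, table, and column names are made
up, and the keyspace is assumed to exist already:)

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class DynamicColumnsSketch {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect()) {
                // Each "dynamic column" becomes one clustered row under the
                // partition key, so a row can have any number of attributes
                // without schema changes.
                session.execute("CREATE TABLE IF NOT EXISTS my_ks.attrs ("
                        + "entity_id uuid, attr_name text, attr_value text, "
                        + "PRIMARY KEY (entity_id, attr_name))");
                session.execute("INSERT INTO my_ks.attrs (entity_id, attr_name, attr_value) "
                        + "VALUES (uuid(), 'color', 'red')");
            }
        }
    }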
I can’t think of a situation where I’d choose Cassandra as a database in a
single-host use case (if you’re sure it’ll never be more than one machine).
--
Jeff Jirsa
> On Oct 18, 2018, at 12:31 PM, Abdelkrim Fitouri wrote:
>
> Hello,
>
> I am wondering if using cassandra as one local database…
>
> ---> PostgreSQL allows you to use an array type or a map type with a dynamic
> number of records, provided of course that the cardinality of the
> collection is not "too" big
>
Thanks for these details, but what do you mean by "the cardinality of the
collection is not too big"?
In my solution a c…
I agree with Jeff here.
Furthermore, Cassandra should generally be your solution of last resort - if
nothing else works out.
In your case I’d try sqlite or leveldb (or rocksdb).
> On 18 Oct 2018, at 11:46, Jeff Jirsa wrote:
>
> I can’t think of a situation where I’d choose Cassandra as a database…
Hello,
I wanted to use the driver with the included netty jars, since the netty
version in Debian stretch is too old.
But my program fails with NoClassDefFoundError: io/netty/util/NettyRuntime
The reason is that the driver tarball has netty 4.0.56 in the lib
directory, but this version doesn't include that class.
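(For what it's worth, that matches NettyRuntime not existing in the netty
4.0.x line. A quick classpath probe, as a sketch; the class name of the probe
itself is made up:)

    public class NettyCheck {
        public static void main(String[] args) {
            try {
                // io.netty.util.NettyRuntime is absent from netty 4.0.x jars,
                // so this tells you whether a 4.1+ jar is actually being loaded.
                Class<?> c = Class.forName("io.netty.util.NettyRuntime");
                System.out.println("Found " + c + " (netty 4.1+)");
            } catch (ClassNotFoundException e) {
                System.out.println("No NettyRuntime: classpath resolves netty < 4.1");
            }
        }
    }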
tl;dr: a generic trigger on TABLES that will mirror all writes to
facilitate data migrations between clusters or systems. What is necessary
to ensure full write mirroring/coherency?
When Cassandra clusters have several "apps" aka keyspaces serving
applications colocated on them, but the app/keyspa…
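(To make the idea concrete, a rough sketch against the Cassandra 3.x trigger
interface; MirrorSink is a hypothetical stand-in for, say, a Kafka producer,
and the failure handling it glosses over is exactly the coherency question:)

    import java.util.Collection;
    import java.util.Collections;
    import org.apache.cassandra.db.Mutation;
    import org.apache.cassandra.db.partitions.Partition;
    import org.apache.cassandra.triggers.ITrigger;

    public class MirrorTrigger implements ITrigger {
        // Hypothetical sink abstraction; a real one might wrap a Kafka
        // producer or a driver session pointed at the target cluster.
        interface MirrorSink {
            void send(String keyspace, String table, Partition update);
        }

        private final MirrorSink sink = (ks, table, update) -> {
            /* e.g. producer.send(serialize(update)) */
        };

        @Override
        public Collection<Mutation> augment(Partition update) {
            // Forward a copy of every write; what to do when the sink is down
            // (block? buffer? drop?) is the hard part of full mirroring.
            sink.send(update.metadata().ksName, update.metadata().cfName, update);
            // Return no extra local mutations; we only observe the write.
            return Collections.emptyList();
        }
    }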
I guess there is also write-survey-mode from cass 1.1:
https://issues.apache.org/jira/browse/CASSANDRA-3452
Were triggers intended to supersede this capability? I can't find a lot of
"user level" info on it.
On Thu, Oct 18, 2018 at 10:53 AM Carl Mueller
wrote:
> tl;dr: a generic trigger on TA…
The write sampling adds an extra instance with the same schema to test
things like yaml params or compaction without impacting reads or correctness -
it's different from what you describe.
--
Jeff Jirsa
> On Oct 18, 2018, at 5:57 PM, Carl Mueller
> wrote:
>
> I guess there is also wr…
Thanks. Well, at a minimum I'll probably start writing something soon for
trigger-based write mirroring, and we will probably support kafka and
another cassandra cluster, so if those seem to work I will contribute
those.
On Thu, Oct 18, 2018 at 11:27 AM Jeff Jirsa wrote:
> The write sampling is…
Isn't this what CDC was designed for?
https://issues.apache.org/jira/browse/CASSANDRA-8844
On Thu, Oct 18, 2018 at 10:54 AM Carl Mueller
wrote:
> tl;dr: a generic trigger on TABLES that will mirror all writes to
> facilitate data migrations between clusters or systems. What is necessary
> to en…
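(For reference, CDC from CASSANDRA-8844 shipped in 3.8 and is opt-in:
cdc_enabled in cassandra.yaml plus a per-table flag. A minimal sketch of
flipping it on via the Java driver; the keyspace and table names are made up:)

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class EnableCdc {
        public static void main(String[] args) {
            // Assumes cdc_enabled: true is already set in cassandra.yaml on the nodes.
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect()) {
                // After this, mutations for the table also land in CDC commit
                // log segments for a separate consumer process to read and ship.
                session.execute("ALTER TABLE my_ks.my_table WITH cdc = true");
            }
        }
    }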
Hi,
For those who were asking about the performance impact of block size on
compression I wrote a microbenchmark.
https://pastebin.com/RHDNLGdC
[java] Benchmark                     Mode  Cnt  Score  Error  Units
[java] CompactIntegerSequenceB…
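(The pastebin has the actual code; for readers who just want the shape of such
a measurement, a stripped-down JMH sketch along the same lines, with made-up
block sizes and lz4-java standing in for the codec:)

    import java.util.Arrays;
    import net.jpountz.lz4.LZ4Compressor;
    import net.jpountz.lz4.LZ4Factory;
    import net.jpountz.lz4.LZ4FastDecompressor;
    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.Param;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.Setup;
    import org.openjdk.jmh.annotations.State;

    @State(Scope.Thread)
    public class BlockSizeBench {
        // Hypothetical block sizes to compare; the real set is in the pastebin.
        @Param({"4096", "16384", "65536"})
        int blockSize;

        byte[] compressed;
        byte[] out;
        LZ4FastDecompressor decompressor;

        @Setup
        public void setup() {
            LZ4Factory factory = LZ4Factory.fastestInstance();
            LZ4Compressor compressor = factory.fastCompressor();
            decompressor = factory.fastDecompressor();
            byte[] block = new byte[blockSize];
            Arrays.fill(block, (byte) 'x'); // trivially compressible payload
            compressed = compressor.compress(block);
            out = new byte[blockSize];
        }

        @Benchmark
        public byte[] decompress() {
            // Measures per-block decompression cost as a function of block size.
            decompressor.decompress(compressed, 0, out, 0, blockSize);
            return out;
        }
    }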
Hi,
We are using Cassandra to develop our application, and we use a secondary
index on one of our tables for faster lookups. In production we are now seeing
growing disk usage on the table that has the secondary index. This has become
a problem for us since we have a lot of data that needs to s…
Could be done with CDC
Could be done with triggers
(Could be done with vtables — double writes or double reads — if they were
extended to be user facing)
Would be very hard to generalize properly, especially handling failure cases
(write succeeds in one cluster/table but not the other), which are…
I might be missing something but we’ve done this operation on a few
occasions by:
1) Commission the new cluster and join it to the existing cluster as a 2nd
DC
2) Replicate just the keyspace that you want to move to the 2nd DC
3) Make app changes to read moved tables from 2nd DC
4) Change keyspace…
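(Concretely, step 2 is an ALTER KEYSPACE followed by a rebuild on the new DC's
nodes. A sketch via the Java driver; the keyspace and DC names are made up:)

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class ReplicateToNewDc {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect()) {
                // Add the new DC to the keyspace's replication settings.
                session.execute("ALTER KEYSPACE my_ks WITH replication = "
                        + "{'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3}");
                // Then run `nodetool rebuild -- DC1` on each node in DC2 so the
                // keyspace's existing data streams over from the old DC.
            }
        }
    }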
Trigger-based has worked for us in the past to get once-only output of what’s
happened - pushing this to Kafka and using Kafka Connect allowed us to then
direct the stream to other endpoints.
CDC-based streaming has the issue of duplicates, which are technically fine if
you don’t care that much…