DuyHai Doan,
For wide rows, would it be better to switch to LeveledCompactionStrategy?
The number of SSTables would decrease, and it is also optimized for reads.
I have read in quite a few places that LeveledCompactionStrategy is better
for wide rows.
Is that true, and would you recommend it?
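(For reference, switching a table to LCS is a single schema change. A minimal
sketch in CQL, with a hypothetical keyspace/table name; sstable_size_in_mb is
optional and the value here is only an example:)

    ALTER TABLE my_keyspace.my_table
      WITH compaction = { 'class': 'LeveledCompactionStrategy',
                          'sstable_size_in_mb': 160 };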
If the table is fragmented across many SSTables on disk, you may run into
trouble.
Let me explain the reason. Your query is perfectly fine, but if you're
querying a partition of, let's say, 1 million rows spread across 10
SSTables, Cassandra may need to read the partition splits in all of those
SSTables.
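(One way to check whether this is happening is nodetool's per-table read
histograms: the "SSTables" column shows how many SSTables are touched per
read at each percentile. Keyspace and table names below are placeholders:)

    nodetool cfhistograms my_keyspace my_table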
"I do use rows that can span few thousands to a few million, I'm not sure
if range
slices happen using the memory."
What are your query patterns ? For a given partition, take a slice of xxx
columns ? Or give me a range of partitions ?
For the 1st scenario, depending on how many columns you want
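(To make the two scenarios concrete, here is a sketch in CQL against a
hypothetical time-series table; the schema and names are assumptions, not
from this thread:)

    CREATE TABLE my_keyspace.events (
        sensor_id text,
        ts        timestamp,
        value     double,
        PRIMARY KEY (sensor_id, ts)   -- one partition per sensor, wide in ts
    );

    -- Scenario 1: a slice of columns within a single partition
    SELECT * FROM my_keyspace.events
     WHERE sensor_id = 'sensor-42'
       AND ts >= '2014-01-01' AND ts < '2014-02-01'
     LIMIT 5000;

    -- Scenario 2: a range of partitions, paged by token
    SELECT * FROM my_keyspace.events
     WHERE token(sensor_id) > token('sensor-42')
     LIMIT 100;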
I'm running a 4-node cluster with RF=3, CL of QUORUM for writes and ONE for
reads. Each node has 3.7GB RAM and a 32GB SSD; the commitlog is on another
disk. Currently each node has about 12GB of data. The cluster is always in
normal health unless a repair happens; that's when some nodes go to medium
health in terms of