Hi all,
I'm concerned about how much disk space we use, so I'm curious about
compression. We are currently on 3.11.0 with the default LZ4Compressor
('chunk_length_in_kb': 64).
Is there a setting that would give us stronger compression?
Because most of our data is time series data with a TTL, we u
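In 3.11 the built-in alternatives to the default LZ4Compressor are SnappyCompressor and DeflateCompressor; Deflate typically compresses better at a CPU cost, and a larger chunk_length_in_kb also improves the ratio at the expense of read amplification. A minimal sketch, assuming a hypothetical table ks.events:

```sql
-- Sketch only: ks.events is a placeholder table name.
ALTER TABLE ks.events
  WITH compression = {'class': 'DeflateCompressor', 'chunk_length_in_kb': 64};
-- The new settings apply to newly written SSTables; rewrite existing ones with:
--   nodetool upgradesstables -a ks events
```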
Hi All,
We have built Cassandra on AWS EC2 instances. Initially, when creating the
cluster, we did not consider a multi-region deployment, and we used the AWS
EC2Snitch.
We used EBS volumes to store our data, and each of those disks has filled
to around 350 GB.
We want to extend it to multi-region and
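For the snitch part of that move, a hedged sketch of the relevant cassandra.yaml change (applied in a rolling fashion, one node at a time). Both EC2Snitch and Ec2MultiRegionSnitch derive the data center name from the AWS region and the rack from the availability zone, so the DC/rack names, and hence replica placement, stay consistent across the switch:

```yaml
# Sketch: cassandra.yaml change for a multi-region EC2 cluster.
# Ec2MultiRegionSnitch broadcasts the instance's public IP for cross-region
# traffic while still using the private IP within a region.
endpoint_snitch: Ec2MultiRegionSnitch
```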
Hello everyone,
This is the solution:
@Autowired
CassandraOperations cassandraOperationsInstance;
...
...
// MyEntity below is a placeholder for your mapped entity class.
Pageable request = CassandraPageRequest.first(1000);
Slice<MyEntity> slice = null;
Query query = Query.empty().pageRequest(request);
do {
    slice = cassandraOperationsInstance.slice(query.pageRequest(request), MyEntity.class);
    // process slice.getContent() here
    request = slice.hasNext() ? slice.nextPageable() : null;
} while (request != null);
Great post, Jonathan! Thank you very much.
~Eric
On Wed, Aug 8, 2018 at 2:34 PM Jonathan Haddad wrote:
> Hey folks,
>
> We've noticed a lot over the years that people create tables usually
> leaving the default compression parameters, and have spent a lot of time
> helping teams figure out the
Hey folks,
We've noticed a lot over the years that people usually create tables with
the default compression parameters left in place, and we have spent a lot of
time helping teams figure out the right settings for their cluster based on
their workload. I finally managed to write some thoughts down along with
Hi Jeff/Jon et al, here is what I'm thinking of doing to clean up; please
let me know what you think.
This is precisely my problem, I believe:
http://thelastpickle.com/blog/2017/12/14/should-you-use-incremental-repair.html
With this, I have a lot of wasted space due to a bad incremental repair. So
I am think
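One way to sketch that cleanup (keyspace/table names and the data path below are placeholders): mark the affected SSTables as unrepaired with the offline sstablerepairedset tool, then stick to full repairs going forward.

```shell
# Hedged sketch; my_ks / my_table and the data path are placeholders.
# sstablerepairedset must be run while the node is stopped.
nodetool drain && sudo service cassandra stop
sstablerepairedset --really-set --is-unrepaired \
    /var/lib/cassandra/data/my_ks/my_table-*/*-Data.db
sudo service cassandra start
# Then run full (non-incremental) repairs from here on:
nodetool repair --full my_ks
```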
Thank you for explaining, Alain!
Predetermining the nodes to query, then sending the 'data' request to one of
them and a 'digest' request to another (for CL=QUORUM, RF=3), indeed explains
the more effective use of the filesystem cache when dynamic snitching is
disabled.
So, there will be replica / replicas
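For reference, a sketch of the cassandra.yaml settings involved (the values shown are the 3.11 defaults):

```yaml
# Disable score-based replica reordering entirely:
dynamic_snitch: false
# Or, when left enabled, tune how often scores reset and how much worse a
# replica must score before traffic is routed away from the preferred one:
dynamic_snitch_reset_interval_in_ms: 600000
dynamic_snitch_badness_threshold: 0.1
```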