Yes, that's LIKELY "better".
On Mon, Dec 11, 2017 at 8:10 AM, Micha wrote:
> ok, thanks for the answer.
>
> So the better approach here is to adjust the table schema to get the
> partition size to around 100MB max.
> This means using a partition key with multiple parts and making more
> selects instead of one when querying the data (which may increase
> parallelism).
>
> Michael
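As a minimal sketch of the approach Micha describes (not from the thread itself): add a "bucket" component to the partition key so no single partition grows too wide, then fan one logical read out into one SELECT per bucket. The table and column names (`readings`, `sensor_id`, `bucket`, `ts`) and the 6-hour bucket size are assumptions for illustration only.

```python
# Sketch: splitting a wide time-series partition with a bucket component
# in the partition key, e.g. PRIMARY KEY ((sensor_id, bucket), ts).
# All names and the bucket width are hypothetical.
from datetime import datetime, timedelta

BUCKETS_PER_DAY = 4  # assumption: 6-hour buckets keep partitions small enough


def bucket_for(ts: datetime) -> str:
    """Map a timestamp to its partition bucket, e.g. '2017-12-11-1'."""
    return f"{ts:%Y-%m-%d}-{ts.hour // (24 // BUCKETS_PER_DAY)}"


def queries_for_range(sensor_id: str, start: datetime, end: datetime):
    """Build one SELECT per (sensor_id, bucket) partition covering [start, end)."""
    stmts, seen, ts = [], set(), start
    while ts < end:
        b = bucket_for(ts)
        if b not in seen:
            seen.add(b)
            stmts.append(
                ("SELECT * FROM readings WHERE sensor_id = %s AND bucket = %s "
                 "AND ts >= %s AND ts < %s",
                 (sensor_id, b, start, end))
            )
        ts += timedelta(hours=1)
    return stmts
```

The per-bucket statements target distinct partitions, so a driver can issue them concurrently (e.g. via asynchronous execution), which is the increase in parallelism mentioned above.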
---
There are a few, and there have been various proposals (some in progress) to
deal with them. The two most obvious problems are:
The primary problem for most people is that wide partitions cause JVM heap
pressure on reads (CASSANDRA-11206, CASSANDRA-9754). This is because we
break the wide partitions
Hi,
What are the effects of large partitions?
I have a few tables with partition sizes as follows:
95% 24000
98% 42000
99% 85000
Max 82000
So, should I redesign the schema to make this max smaller, or does it not
matter much, since 99% of the partitions are <= 85000?
Thanks for answering