So you have deleted the partition. Do not delete the sstables directly.
By default Cassandra keeps the tombstones untouched for 10 days
(gc_grace_seconds = 864000). Once those 10 days have passed (which should
be the case now, since your message was on August 12), a compaction is
needed to actually reclaim the space. You could force a major compaction
on that table with nodetool compact.
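As a minimal sketch of the wait-then-compact procedure described above: the keyspace/table names below are hypothetical (adjust for your schema), and the nodetool call is shown commented out since it needs a live node.

```shell
# gc_grace_seconds defaults to 10 days; the arithmetic:
echo $((10 * 24 * 3600))   # prints 864000, the default gc_grace_seconds

# Once gc_grace_seconds has elapsed, a major compaction will actually
# drop the tombstoned data. Hypothetical names; run on each affected node:
# nodetool compact my_keyspace my_table
```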
It may also be worth upgrading to Cassandra 3.11.4. There are changes
in 3.6+ that significantly reduce heap pressure from very large partitions.
On Mon, Aug 12, 2019 at 9:13 AM Gabriel Giussi wrote:
> I've found a huge partition (~9GB) in my Cassandra cluster because I'm
> losing 3 nodes recurrently due to OutOfMemoryError:
> ERROR [SharedPool-Worker-12] 2019-08-12 11:07:45,735
> JVMStabilityInspector.java:140 - JVM state determined to be unstable.
> Exiting forcefully due to:
> java.lang.OutOfMemoryError