Hi Dan,
You're welcome, but I must admit you solved it on your own, as I was about
to advise you to reduce all the JVM stuff, the exact opposite of the working
solution you found :-). As 48 GB is a lot (I would have said something like
a 26 GB heap, and memtables about 4 GB or something like that) to tr
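For reference, the sizing I had in mind would look roughly like this (only a
sketch, the right numbers depend on your workload):

    # cassandra-env.sh
    MAX_HEAP_SIZE="26G"

    # cassandra.yaml (value is in MB, so ~4 GB)
    memtable_heap_space_in_mb: 4096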
Quick follow-up here: so far I've had these nodes stable for about 2 days
now with the following (still mysterious) solution: *increase*
memtable_heap_space_in_mb to 20GB. The nodes were having issues at the
default value of 1/4 heap (12GB in my case; I misspoke earlier and said
16GB). Upping it to 20GB
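For clarity, the change was just this one line in cassandra.yaml (the value
is in MB, so 20 GB is roughly 20480; the exact rounding is my choice and not
significant):

    memtable_heap_space_in_mb: 20480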
Hi, thanks for responding Alain. Going to provide more info inline.
However, a small update that is probably relevant: while the node was in
this state (MemtableReclaimMemory building up), since this cluster is not
serving live traffic I temporarily turned off ALL client traffic, and the
node still
Hi Dan,
I'll try to go through all the elements:
> seeing this odd behavior happen, seemingly to single nodes at a time
Is that one node at a time, or always the same node? Do you consider
your data model to be fairly evenly distributed?
> The node starts to take more and more memory (instance
Also should note: Cassandra 2.2.5, CentOS 6.7
On Wed, Mar 2, 2016 at 1:34 PM, Dan Kinder wrote:
> Hi y'all,
>
> I am writing to a cluster fairly fast and seeing this odd behavior happen,
> seemingly to single nodes at a time. The node starts to take more and more
> memory (instance has 48GB memo
Hi y'all,
I am writing to a cluster fairly fast and seeing this odd behavior happen,
seemingly to single nodes at a time. The node starts to take more and more
memory (instance has 48GB memory on G1GC). tpstats shows that
MemtableReclaimMemory Pending starts to grow first, then later
MutationStage
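In case it helps anyone reproduce this, the pending counts I'm referring to
come from nodetool tpstats; I'm just polling it and watching the Pending
column for those two pools, with something like (the interval and grep
pattern are just what I happen to use):

    watch -n 10 'nodetool tpstats | grep -E "MemtableReclaimMemory|MutationStage"'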