There have been a lot of discussions about GC tuning on the mailing list. Here's
a really quick set of guidelines I use; please search the mail archive if it
does not answer your question.
If heavy GC activity correlates with Cassandra compaction, do one or more of the
following (see the cassandra.yaml sketch below):
* reduce concurrent_compactors
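For reference, the compaction knobs being referred to live in cassandra.yaml; a
minimal sketch, assuming 1.x-era setting names (values are illustrative, not
recommendations):

    # cassandra.yaml -- throttle compaction so it competes less with reads and the GC
    concurrent_compactors: 2                # defaults to one per core; lowering it reduces CPU and heap churn
    compaction_throughput_mb_per_sec: 16    # caps compaction I/O; 0 disables throttling entirely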
Hello,
Just to wrap up on my part of this thread, tuning the CMS initiating occupancy
threshold (-XX:CMSInitiatingOccupancyFraction) to 70 appears to have resolved my
issues with the memory warnings. However, I don't believe this would be a
solution to all the issues mentioned below, although it does make sense
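For anyone else trying this, the flag lives wherever the JVM options are set
(cassandra-env.sh in a stock install); a minimal sketch, assuming CMS is the
collector in use:

    # cassandra-env.sh -- start CMS at 70% old-gen occupancy instead of the shipped 75%
    JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
    JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=70"
    JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"    # honour the fraction rather than the JVM's own heuristic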
We are facing a similar issue, and we are not able to keep the ring stable.
We are using C* 1.2.3 on CentOS 6, 32 GB RAM, 8 GB heap, 6 nodes.
The total data is ~84 GB (which is relatively small for C* to handle, with an
RF of 3). Our application is read-heavy; we see the GC complaints on all
nodes. I cop
We are using DSE, which I believe is also on 1.1.9. We have basically had an
unusable cluster for months due to this error. In our case, once it starts
doing this, it flushes sstables to disk and eventually fills up the disk
to the point where it can't compact. If we catch it soon enough
I would have said the exact opposite, but I am not really sure.
I have configured the threshold to 80% of the heap, since I have an 8 GB heap. I
think the purpose of this threshold is to keep a safety margin to avoid
OOMing. C* can be configured with a 1 GB heap, so the margin is about 250 MB. On
an 8 GB he
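If it helps, the heap-fraction thresholds being discussed appear to be the
emergency valves in cassandra.yaml; a sketch with the 1.x defaults as I remember
them (double-check against your own config):

    # cassandra.yaml -- emergency actions, expressed as fractions of the max heap
    flush_largest_memtables_at: 0.75    # flush the biggest memtables once heap usage passes 75%
    reduce_cache_sizes_at: 0.85         # start shrinking key/row caches at 85%
    reduce_cache_capacity_to: 0.6       # ...down to 60% of their configured capacity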
The CMS occupancy threshold is usually set to 75% as well; it might help
to lower it to 70% to see if that resolves these warnings, since Cassandra
will then start a CMS GC before the heap hits the 75% warning level.
There is also a setting to lower the maximum amount of memory used for
compacting each row. This may ca
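Presumably the per-row setting meant here is in_memory_compaction_limit_in_mb; a
hedged sketch (1.x name, default value from memory):

    # cassandra.yaml -- rows wider than this limit are compacted on disk instead of in the heap
    in_memory_compaction_limit_in_mb: 64    # lowering it trades compaction speed for heap headroom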
From: mthero...@yahoo.com
Sent: Friday, April 19, 2013 6:00 PM
To: user@cassandra.apache.org
Subject: Advice on memory warning
Hello,
We've recently upgraded from m1.large to m1.xlarge instances on AWS to handle
additional load, but also to relieve memory pressure. It appears to have
accomplished both; however, we are still getting a warning, 0-3 times a day, on
our database nodes:
WARN [ScheduledTasks:1] 2013-04-19
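The truncated WARN above is presumably the ScheduledTasks/GCInspector heap-usage
message; a quick way to confirm how full the heap actually is, using standard
tooling (the host and pid below are placeholders):

    # JVM heap usage as Cassandra itself reports it
    nodetool -h <node> info

    # old-gen occupancy and GC time, sampled every 5 seconds
    jstat -gcutil <cassandra-pid> 5s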