You're welcome Schubert.
I look forward to any new results you may come up with.
It would also be interesting, when you run your tests again, to look at
the GC logs and see to what extent
https://issues.apache.org/jira/browse/CASSANDRA-896 is the culprit for what
you will see. Identifying any ot
Since the scale of the GC graph in the slides is different from that of the
throughput graphs, I will run another test for this issue.
Thanks for your advice, Masood and Jonathan.
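To act on the GC-log suggestion above during the next run, something like the
following could be appended to the JVM options in cassandra.in.sh. This is just
a sketch: the flags are standard HotSpot options, but the log path is only an
example.

# enable GC logging so pause times can be lined up against the throughput dips
JVM_OPTS="$JVM_OPTS \
-Xloggc:/var/log/cassandra/gc.log \
-XX:+PrintGCDetails \
-XX:+PrintGCTimeStamps \
-XX:+PrintGCApplicationStoppedTime"

PrintGCApplicationStoppedTime is the interesting one here, since it reports
total stop-the-world time rather than just collection time.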
---
Here, I'll just post my cassandra.in.sh:
JVM_OPTS=" \
-ea \
-Xms128M \
-Xmx6G \
-XX:Tar
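For comparison, a CMS-based JVM_OPTS of the kind the 0.6-era cassandra.in.sh
uses looks roughly like the following. The heap sizes here are illustrative,
not recommendations.

# illustrative CMS settings: ParNew for the young generation, CMS for the old,
# and -Xms pinned to -Xmx so the heap does not grow in steps under load
JVM_OPTS=" \
-ea \
-Xms6G \
-Xmx6G \
-XX:TargetSurvivorRatio=90 \
-XX:+UseParNewGC \
-XX:+UseConcMarkSweepGC \
-XX:+CMSParallelRemarkEnabled \
-XX:+HeapDumpOnOutOfMemoryError"

One difference from the options above is pinning -Xms to -Xmx: starting at
128M means the heap has to grow repeatedly under a heavy insert load, which
adds avoidable GC work.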
Minimizing GC pauses, or minimizing the time slots allocated to GC pauses --
either through configuration or through re-implementation of garbage-collection
"bottlenecks" (i.e. object-generation "bottlenecks") -- seems to be the
immediate approach. (Other approaches appear to be more intrusive.)
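As one concrete instance of the configuration route (a sketch on my part, not
something measured in this thread), CMS can be made to start its concurrent
cycles at a fixed old-generation occupancy instead of letting the JVM decide,
so that collection stays ahead of a sustained insert load. The 75% figure is
only an illustration.

# illustrative: start CMS at a fixed old-gen occupancy so concurrent
# collection begins well before the old generation fills up
JVM_OPTS="$JVM_OPTS \
-XX:CMSInitiatingOccupancyFraction=75 \
-XX:+UseCMSInitiatingOccupancyOnly"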
At code level,
It's hard to tell from those slides, but it looks like the slowdown
doesn't hit until after several GCs.
Perhaps this is compaction kicking in, not GCs? Definitely the extra
I/O + CPU load from compaction will cause a drop in throughput.
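One way to tell the two apart during a run (my suggestion; the paths and
sampling interval are only examples): compaction shows up as sustained I/O on
the data directory while the GC log stays quiet, whereas GC shows up as
application-stopped time with the disks largely idle.

# sample disk utilisation every 5 seconds during the run
iostat -x 5
# and pull the stop-the-world pauses out of a GC log written with
# -XX:+PrintGCApplicationStoppedTime
grep "Total time for which application threads were stopped" /var/log/cassandra/gc.log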
On Mon, Apr 19, 2010 at 6:14 AM, Masood Mortazavi wrote:
We see this behavior as well with 0.6; the heap usage graphs look almost identical.
The GC is a noticeable bottleneck; we've tried the JDK u19 and JRockit VMs. It
basically kills any kind of soft real-time behavior.
From: Masood Mortazavi [mailto:masoodmortaz...@gmail.com]
Sent: Monday, April 19, 2010 4
I'm seeing some issues like this as well; in fact, I think seeing your graphs
has helped me understand the dynamics of my cluster better.
Using some ballpark figures for inserting single-column objects of ~500 bytes
onto individual nodes (not when combined as a cluster):
Node1: Inserts 12000/s
N