Peter,
Thanks for your response. I'm looking into some of the ideas in your
other recent mail, but I had another followup question on this one...
Is there any way to control the CPU load when using the "stress" benchmark?
I have some control over that with our home-grown benchmark, but I
thought it made sense to use the official benchmark tool as people might
more readily believe those results and/or be able to reproduce them. But
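(I don't know offhand whether the stress tool itself exposes a knob for
this, but for comparison, here is a minimal sketch of the kind of
sleep-based throttle a home-grown client can use to cap offered load;
the class name and numbers are illustrative, not anything from the
stress tool.)

```python
import time

class RateLimiter:
    """Cap offered load at roughly target_ops_per_sec by sleeping
    between requests.

    Illustrative only: a generic client-side throttle, not a feature
    of the Cassandra stress tool.
    """
    def __init__(self, target_ops_per_sec):
        self.interval = 1.0 / target_ops_per_sec
        self.next_slot = time.monotonic()

    def acquire(self):
        """Block until the next request slot is available."""
        now = time.monotonic()
        if now < self.next_slot:
            time.sleep(self.next_slot - now)
        self.next_slot = max(self.next_slot, now) + self.interval

limiter = RateLimiter(200)  # e.g. cap at 200 ops/sec
for _ in range(5):
    limiter.acquire()
    # issue_request()  # placeholder for the actual read/write
```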
Peter,
Thanks for your input. Can you tell me more about what we should be
looking for in the gc log? We've already got the gc logging turned
on, and we've already done the plotting to show that in most
cases the outliers are happening periodically (with a period of
10s of seconds to a fe
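(One simple thing to check, assuming the log was produced with
HotSpot's -XX:+PrintGCApplicationStoppedTime flag, is whether the long
stop-the-world pauses line up with the latency outliers. A rough
sketch; the function name and threshold are my own, not standard:)

```python
import re

# Matches HotSpot's -XX:+PrintGCApplicationStoppedTime output, e.g.
# "Total time for which application threads were stopped: 0.2345678 seconds"
STOPPED_RE = re.compile(
    r"Total time for which application threads were stopped: "
    r"([0-9.]+) seconds")

def long_pauses(lines, threshold_secs=0.1):
    """Return the stop-the-world pause durations (in seconds) that
    exceed threshold_secs, in the order they appear in the log."""
    pauses = []
    for line in lines:
        m = STOPPED_RE.search(line)
        if m:
            secs = float(m.group(1))
            if secs > threshold_secs:
                pauses.append(secs)
    return pauses
```

With -XX:+PrintGCDateStamps also enabled, each log line carries a
timestamp, so the long pauses can be matched against the times of the
outlier requests.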
> I'm trying to understand if this is expected or not, and if there is
Without careful tuning, outliers around a couple of hundred ms are
definitely expected in general (not *necessarily*, depending on
workload) as a result of garbage collection pauses. The impact will be
worsened a bit if you are
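(A back-of-the-envelope way to sanity-check whether GC pauses can
account for the observed outlier rate, assuming requests arrive
uniformly; the numbers below are made up for illustration:)

```python
def fraction_affected(pause_ms, period_s):
    """Rough fraction of requests that land inside a stop-the-world
    pause, assuming uniform request arrivals and a pause of pause_ms
    recurring every period_s seconds. Back-of-the-envelope only."""
    return (pause_ms / 1000.0) / period_s

# e.g. a 200 ms pause every 30 s stalls roughly 0.7% of requests,
# each seeing up to ~200 ms of extra latency
frac = fraction_affected(200, 30)
```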
Has anyone looked much at the maximum latency of cassandra read/write
requests? (rather than the average latency and average throughput)
We've been struggling for quite some time trying to figure out why we
see occasional read or write response times in the 100s of milliseconds
even on fast m
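(For looking at maximum rather than average latency, a simple
nearest-rank percentile summary over the recorded per-request times is
enough to make the tail visible; this helper is my own sketch, not
anything Cassandra ships:)

```python
def latency_percentiles(samples_ms, percentiles=(50, 99, 99.9)):
    """Summarize tail latency: the max plus selected nearest-rank
    percentiles of a list of per-request latencies in milliseconds."""
    s = sorted(samples_ms)
    out = {"max": s[-1]}
    for p in percentiles:
        idx = min(len(s) - 1, int(len(s) * p / 100.0))
        out[p] = s[idx]
    return out
```

Plotting the 99th/99.9th percentiles over time, rather than the mean,
is what makes periodic outliers like the ones described above stand out.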