Already submitted and fixed! Thanks Jonathan for your help on this. I really
appreciate it!
https://issues.apache.org/jira/browse/CASSANDRA-2158
On Mon, Feb 14, 2011 at 2:54 PM, Robert Coli wrote:
> Regarding very large memtables, it is important to recognize that
> throughput refers only to the size of the COLUMN VALUES, and not, for
> example, their names.
>
That would be a bug in its own right. There are lots of use cases that only
store data in the column names and leave the values empty.
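To illustrate why values-only accounting would be a problem, here is a hypothetical sketch (my own illustration, not Cassandra's actual accounting code): a counter that tracks only value bytes never grows for the common pattern of putting the data in the column name with an empty value, so a flush threshold based on it would never trigger.

// Hypothetical illustration only -- not Cassandra's actual accounting code.
// Shows how counting value bytes alone under-estimates memtable size for
// columns whose data lives in the name with an empty value.
public class ThroughputAccounting {
    static long valueOnlyBytes = 0;     // what a values-only counter would track
    static long nameAndValueBytes = 0;  // roughly what the columns actually occupy

    static void addColumn(byte[] name, byte[] value) {
        valueOnlyBytes += value.length;
        nameAndValueBytes += name.length + value.length;
    }

    public static void main(String[] args) {
        // A "wide row" index pattern: data in the column name, empty value.
        for (int i = 0; i < 1000000; i++) {
            addColumn(("user:" + i + ":2011-02-14").getBytes(), new byte[0]);
        }
        System.out.println("values-only estimate: " + valueOnlyBytes + " bytes");
        System.out.println("names + values:       " + nameAndValueBytes + " bytes");
        // The values-only estimate stays at 0, so a flush threshold based on it
        // would never be reached even though the memtable keeps growing.
    }
}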
On Sat, Feb 12, 2011 at 11:17 PM, E S wrote:
> While experimenting with this, I found a bug where you can't have memtable
> throughput configured past 2 gigs without an integer overflow screwing up the
> flushes. That makes me feel like I'm in uncharted territory :).
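For reference, that class of overflow looks something like the sketch below (a simplified illustration, not the actual CASSANDRA-2158 code): converting a megabyte setting to bytes in 32-bit int arithmetic wraps negative once the setting exceeds 2047 MB, so the flush threshold comparison no longer behaves as intended.

// Simplified illustration of the overflow class of bug, not the actual
// CASSANDRA-2158 code: converting a megabyte setting to bytes with
// 32-bit int arithmetic wraps around above 2047 MB.
public class ThresholdOverflow {
    public static void main(String[] args) {
        int throughputMb = 3072;                         // e.g. a 3 GB memtable setting

        int brokenBytes = throughputMb * 1024 * 1024;    // overflows Integer.MAX_VALUE
        long fixedBytes = throughputMb * 1024L * 1024L;  // promote to long before multiplying

        System.out.println("int  threshold: " + brokenBytes);  // -1073741824
        System.out.println("long threshold: " + fixedBytes);   // 3221225472

        // A flush check like "currentBytes >= threshold" misbehaves once the
        // threshold has wrapped negative, which is why the flushes go wrong.
    }
}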
I am sure the project would welcome a JIRA ticket for that.
I should note up front that the JVM simply does not handle heap sizes above
20 GB very well: garbage collection pauses become problematic.
Do you read rows in a uniformly random way? If not, caching is your best
bet for reducing read latencies. You should have enough space to cache all
of your keys.
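A rough back-of-the-envelope check (assumed numbers, not from this thread) of what caching every key costs in memory, to show it is small next to a box this size:

// Back-of-the-envelope estimate with assumed numbers: heap needed to cache
// every key, to show it fits comfortably on a 96 GB machine.
public class KeyCacheEstimate {
    public static void main(String[] args) {
        long rowCount = 100000000L;   // assumed: 100 million rows
        long bytesPerKeyEntry = 100;  // assumed: key bytes plus cache-entry overhead

        double gb = rowCount * bytesPerKeyEntry / (1024.0 * 1024 * 1024);
        System.out.printf("~%.1f GB to cache all %d keys%n", gb, rowCount);
        // ~9.3 GB at these numbers; a key cache skips the index seek on reads
        // even when the row itself is not cached.
    }
}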
I am trying to minimize my SSTable count to help cut down my read latency. I
have some very beefy boxes for my cassandra nodes (96 gigs of memory each). I
think this gives me a lot of flexibility to cut down SSTable count by having a
very large memtable throughput setting.
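The relationship being relied on here, sketched with assumed numbers, is that the rate at which new SSTables appear scales inversely with the memtable threshold:

// Rough sketch with assumed numbers: new SSTables are created at roughly
// (write volume / memtable threshold), so a larger memtable setting means
// fewer, larger SSTables between compaction passes.
public class FlushRateEstimate {
    public static void main(String[] args) {
        double dailyWritesGb = 50.0;  // assumed write volume per node per day

        for (double memtableGb : new double[] {0.5, 2.0, 8.0}) {
            double flushesPerDay = dailyWritesGb / memtableGb;
            System.out.printf("memtable %4.1f GB -> ~%.0f SSTables flushed/day%n",
                              memtableGb, flushesPerDay);
        }
        // Compaction merges these over time, but fewer flushes means fewer
        // SSTables to consult per read between compactions.
    }
}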