Hi,
yes, the heap size is set to 2GB on all nodes. Without any activity, the heap
usage is less than 1GB. Does this include the bloom filters?
From the logs I can see that at the beginning of the test the GC is able to
free enough memory to bring heap usage down to 1GB or less. However, then a lo
(You might find
http://wiki.apache.org/cassandra/LargeDataSetConsiderations helpful, by
the way - it's not entirely up to date at this point... I'll try to
remember to update it.)
--
/ Peter Schuller (@scode, http://worldmodscode.wordpress.com)
Ah, you have two CFs. And my mistake was that I accidentally treated
bits as bytes ;)
My calculation is that the bloom filter size per node for you should be
about 1.8-1.9 GB. If you haven't touched the heap size, IIRC the default
is still going to be 2GB for your 4 GB machines (not sure, please
confirm if
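
For reference, a rough sketch of the sizing arithmetic in Python, using
the standard bloom filter formula. The false-positive target and the
per-node key count below are my assumptions, not numbers from this
thread - plug in your own:

    import math

    def bloom_filter_bytes(num_keys, fp_rate):
        # optimal bit count: m = -n * ln(p) / (ln 2)^2
        bits = -num_keys * math.log(fp_rate) / (math.log(2) ** 2)
        return bits / 8  # bits, not bytes - the mix-up above

    # hypothetical per-node key count and false-positive target
    keys_per_node = 500_000_000
    print(bloom_filter_bytes(keys_per_node, 0.000744) / 2**30, "GiB")

Note also that each SSTable carries its own bloom filter, so a key that
appears in several SSTables is counted several times.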
> Compacted row maximum size: 36904729268
So roughly 36 GB. As long as you're sure each column is only about 1 KB,
the total row size should not be a problem.
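
A quick back-of-the-envelope on that figure, assuming the ~1 KB column
size mentioned above:

    # quoted "Compacted row maximum size" in bytes
    row_bytes = 36904729268
    print(row_bytes / 2**30)   # ~34.4 GiB
    print(row_bytes // 1024)   # ~36 million columns at ~1 KB each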
> While I don't see OOMs when I use only a single thread to page the row, there
> are lots of ParNew collections that take about 5
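
For what it's worth, a minimal sketch of paging a wide row from Python.
I'm assuming pycassa here (the keyspace, host, and CF names are
placeholders); its xget generator fetches the row in slices of
buffer_size columns instead of materializing the whole multi-GB row at
once:

    from pycassa.pool import ConnectionPool
    from pycassa.columnfamily import ColumnFamily

    pool = ConnectionPool('MyKeyspace', ['localhost:9160'])
    cf = ColumnFamily(pool, 'MyCF')

    count = 0
    for name, value in cf.xget('wide_row_key', buffer_size=1024):
        count += 1  # do real per-column work here
    print(count, "columns read")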
Hi,
I have a 15-node cluster where each node has 4GB RAM and 80GB disk. There are
three CFs, of which only two contain data. Each CF contains a total of about
2 billion columns. I have a replication factor of 2. All CFs are compressed with
SnappyCompressor. This is on Cassandra 1.0.2.
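
For context, the rough per-node column count those numbers imply (a
back-of-envelope sketch, ignoring key distribution):

    columns_per_cf = 2_000_000_000
    data_cfs, rf, nodes = 2, 2, 15
    per_node = columns_per_cf * data_cfs * rf // nodes
    print(per_node)  # ~533 million columns per node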
I was run