> How much smaller did the BF get to?
After pending compactions completed today (so I'm presuming the fp_ratio
is now applied to all sstables in the keyspace), it has gone from 20G+
down to 1G. This node is now running comfortably on Xmx4G (used heap ~1.5G).
~mck
Thanks for the update.
How much smaller did the BF get to?
A
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 13/03/2012, at 8:24 AM, Mick Semb Wever wrote:
>
> It's my understanding then for this use case that bloom filters are of
> little importance and that i can
Ok. To summarise, in the hope that it may help others one day, the
actions that got us out of this situation were:
1) upgrade to 1.0.7
2) set fp_ratio=0.99
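For reference, a sketch of how step 2 can be applied. `bloom_filter_fp_chance` is the per-column-family attribute that CASSANDRA-3497 (linked later in this thread) introduced in 1.0.7; the keyspace and column family names below are hypothetical, and the exact cassandra-cli syntax is from memory, so check it against your version:

```
$ cassandra-cli -h localhost
[default@MyKeyspace] update column family MyCF with bloom_filter_fp_chance = 0.99;
```

Note that, as the update at the top of this thread shows, the new ratio only takes effect as sstables are rewritten, so the memory savings appear once compactions have run over the existing data.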
>>> It's my understanding then for this use case that bloom filters are of
>>> little importance and that i can
>>
Yes.
AFAIK there is only one position seek (that will use the bloom filter) at the
start of a get_range_slice request. After that the iterators step over the rows
in the -Data file.
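Aaron's point above can be sketched in a few lines of Python. This is a toy model, not Cassandra's code: `rows` stands in for a sorted -Data file, and `bisect` stands in for the single position seek that a bloom filter could short-circuit.

```python
import bisect

# Toy stand-in for a sorted -Data file: rows ordered by key.
rows = [("key%05d" % i, "value%d" % i) for i in range(1000)]

def get_range_slice(start_key, count):
    # One positional seek to find the start of the range. This is the
    # only step where a bloom filter would help (by skipping sstables
    # that cannot contain start_key at all).
    i = bisect.bisect_left(rows, (start_key,))
    # From here the iterator just steps over consecutive rows in the
    # data file; no further bloom filter probes are involved.
    return rows[i:i + count]

batch = get_range_slice("key00500", 16)
print(len(batch), batch[0][0])  # 16 rows starting at key00500
```

So for a workload that only does large batch range reads, each request pays at most one bloom filter lookup, which is why the filters matter so little here.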
On Sun, 2012-03-11 at 15:36 -0700, Peter Schuller wrote:
> Are you doing RF=1?
That is correct. So are your calculations then :-)
> > This particular cf has up to ~10 billion rows over 3 nodes. Each row is
> > very small, <1k. Data from this cf is only read via hadoop jobs in batch
> > reads of 16k rows at a time.
> [snip]
> > It's my understanding then for this use case that bloom filters are of
> > little importance and that i can
> This particular cf has up to ~10 billion rows over 3 nodes. Each row is
> very small, <1k.
With default settings, 143 million keys roughly gives you 2^31 bits of
bloom filter. Or put another way, you get about 1 GB of bloom filters
per 570 million keys, if I'm not mistaken. If you have 10 billion
rows, that should
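Peter's arithmetic can be checked against the standard bloom filter sizing formula, m/n = -ln(p) / (ln 2)^2 bits per key. The target false-positive rate of ~0.000744 used below is an assumption, not a value quoted in this thread; it corresponds to the ~15 bits per key that his numbers imply.

```python
import math

def bloom_bits_per_key(p):
    # Standard bloom filter sizing: m/n = -ln(p) / (ln 2)^2
    return -math.log(p) / math.log(2) ** 2

p = 0.000744                     # assumed default target fp rate (~15 bits/key)
per_key = bloom_bits_per_key(p)

# 143 million keys -> about 2^31 bits, as stated above.
print(143_000_000 * per_key / 2**31)

# 10 billion rows -> roughly 17-18 GiB of bloom filter, consistent
# with the 20G+ this node was carrying.
print(10_000_000_000 * per_key / 8 / 2**30)
```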
On Sun, 2012-03-11 at 15:06 -0700, Peter Schuller wrote:
> If it is legitimate use of memory, you *may*, depending on your
> workload, want to adjust target bloom filter false positive rates:
>
> https://issues.apache.org/jira/browse/CASSANDRA-3497
This particular cf has up to ~10 billion rows over 3 nodes. Each row is
very small, <1k. Data from this cf is only read via hadoop jobs in batch
reads of 16k rows at a time.
> How did this bloom filter get too big?
Bloom filters grow with the amount of row keys you have. It is natural
that they grow bigger over time. The question is whether there is
something "wrong" with this node (for example, lots of sstables and
disk space used due to compaction not running, etc.).
Using cassandra-1.0.6 one node fails to start.

java.lang.OutOfMemoryError: Java heap space
    at org.apache.cassandra.utils.obs.OpenBitSet.<init>(OpenBitSet.java:104)
    at org.apache.cassandra.utils.obs.OpenBitSet.<init>(OpenBitSet.java:92)
    at org.apache.cassandra.utils.BloomFilterSerializ