> In my case, my key cache hit rate is about 20%, mainly because we do random
> reads. We are just going to leave the index_interval as is for now.

That's pretty painful. If you can up that a bit, it'll probably help you out.
You can adjust the index intervals, too, but I'd significantly increase key
cache size …
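For anyone tuning along at home: the key cache is sized in cassandra.yaml. A
minimal sketch, with a purely illustrative value (not a recommendation from
this thread):

    key_cache_size_in_mb: 512    # left blank, it defaults to min(5% of heap, 100MB)

The hit rate being discussed is visible in the Key Cache line of nodetool info.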
First, a big thank you to Jeff, who has spent endless time helping this
mailing list.
Agreed that we should tune the key cache. In my case, my key cache hit rate
is about 20%, mainly because we do random reads. We are just going to leave
the index_interval as is for now.
On Mon, Jul 10, 2017 at 8:47 PM, Jeff
It's usually more efficient to try to tune the key cache, and hope you never
have to …
"…HTML pages we seem to get better read latencies
by lowering the sampling interval from 128 min / 2048 max to 64 min / 512
max. For large tables like parsoid HTML with ~500G load per node this
change adds a modest ~25mb off-heap memory."
I wonder if anyone has experience working with max and min index_interval
to increase the read speed.
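On 2.1+ the sampling interval is a per-table property, so a change like the
quoted one can be sketched in CQL (the keyspace/table name here is
hypothetical):

    ALTER TABLE wiki.parsoid_html
    WITH min_index_interval = 64 AND max_index_interval = 512;

min_index_interval applies while summary memory is plentiful; under memory
pressure, tables are downsampled toward max_index_interval.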
From: Robert Coli
To: user@cassandra.apache.org
Sent: Monday, June 17, 2013 3:28 PM
Subject: Re: index_interval
On Mon, May 13, 2013 at 9:19 PM, Bryan Talbot wrote:
> Can the index sample storage be treated more like key cache or row cache
> where the total space used can be limited to something less than all
> available system ram, and space is recycled using an LRU (or configurable)
> algorithm?
Treating …
Maybe I should ask the question a different way.
Currently, if all index samples do not fit in the java heap, the jvm will
eventually OOM and the process will crash. The proposed change sounds like
it will move the index samples to off-heap storage, but if that can't hold
all samples, the process will still crash.
So will cassandra provide a way to limit its off-heap usage to avoid
unexpected OOM kills? I'd much rather have performance degrade when 100%
of the index samples no longer fit in memory than have the process killed
with no way to stabilize it without adding hardware or removing data.
-Bryan
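For readers landing here from a search: later releases added essentially this
safety valve. Since 2.1, index summaries sit off-heap under a configurable cap
and are downsampled (degrading performance gracefully) rather than killing the
process. The relevant cassandra.yaml settings, values illustrative:

    index_summary_capacity_in_mb:               # blank = 5% of heap
    index_summary_resize_interval_in_minutes: 60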
… allocation error and quit? If so, are there plans to make the off-heap
usage more dynamic to allow less used pages to be replaced with "hot" data
and the paged-out / "cold" data read back in again on demand?
-Bryan
On Wed, May 8, 2013 at 4:24 PM, Jonathan Ellis wrote:
> index_interval won't be going away, but you won't need to change it as
> often in 2.0: https://issues.apache.org/jira/browse/CASSANDRA-5521
index_interval won't be going away, but you won't need to change it as
often in 2.0: https://issues.apache.org/jira/browse/CASSANDRA-5521
On Mon, May 6, 2013 at 12:27 PM, Hiller, Dean wrote:
> I heard a rumor that index_interval is going away? What is the replacement
> for this? …
I heard a rumor that index_interval is going away? What is the replacement for
this? (We have been having to play with this setting a lot lately: too big and
it gets slow, yet too small and cassandra uses way too much RAM… we are still
trying to find the right balance with this setting.)
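The setting being wrestled with here, as it stood at the time, was global in
cassandra.yaml:

    index_interval: 128    # default; larger = fewer samples and less RAM,
                           # but slower index lookups

which is exactly the too-big/too-small trade-off described above.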
The Index.db file always contains *all* positions of the keys in the data
file. index_interval is the rate at which key positions from the index file
are sampled into memory, so that C* can begin scanning the index file from
the closest sampled position.
On Friday, March 22, 2013 at 11:17 AM, Hiller, Dean wrote:
> I was just curious. Our RAM has significantly reduced but the *Index.db
> files are the same size as before. …
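Rough numbers for the message below, using the ~1.7 billion keys per node
figure from this same thread: at index_interval=128 that is ~13.3M in-memory
samples per node, at 512 only ~3.3M, a 4x reduction, while Index.db keeps
every key and therefore stays the same size on disk.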
I was just curious. Our RAM has significantly reduced but the *Index.db files
are the same size as before.
Any ideas why this would be the case?
Basically, why is our disk size not reduced since RAM is way lower? We are
running strong now with 512 index_interval for the past 2-3 days and RAM
never looked better. We were pushing 10G before and now we are at 2G, slowly
increasing to 8G before gc compacts the long-lived stuff, which goes back
down again.
Argh, now I think that row size has nothing to do with the ii-based index
size/efficiency (I was thinking about the need to read index_interval / 2
entries on average from the index file before finding the proper one, but
that should have nothing to do with row size) - forget the question.
… don't have our new nodes ready yet (we know we should be at 8G, but we
would have a dead cluster if we did that).
On startup, the initial RAM is around 6-8G. Startup with index_interval=512
resulted in 2.5G-2.8G initial RAM, and I have seen it grow to 3.3G and back
down to 2.8G. We just rolled this out an hour ago. Our website response time
is the same as before as well.
We rolled to only 2 nodes (out of 6) in our cluster so far, to test it out
and let it soak …
I am using LCS, so the bloom filter fp default for 1.2.2 is 0.1, so my bloom
filter size is 1.27G RAM (nodetool cfstats), with 1.7 billion rows on each
node.
My cfstats for this CF is attached (since cut and paste screwed up the
formatting). During testing in QA, we were not sure if the index_interval
change was …
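Sanity-checking those numbers: a bloom filter targeting a 0.1 false-positive
rate needs about ln(0.1)/ln(2)^2 ≈ 4.8 bits per key in theory, and 1.27G over
1.7 billion keys works out to roughly 6 bits per key, so the reported size is
the right order of magnitude.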
It would be good to have index_interval configurable per keyspace, preferably
in cassandra.yaml, because I use it for tuning on nodes running out of memory
without noticeably affecting performance.
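For what it's worth, later releases went in roughly this direction: from 2.1
the global index_interval is gone from cassandra.yaml, and sampling is instead
controlled per table via the min_index_interval / max_index_interval
properties shown in the ALTER TABLE sketch earlier on this page. That is
per-table rather than per-keyspace, but it is no longer an all-or-nothing
node-wide setting.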