Hi Chris,
cfstats shows large partitions.
Percentile  SSTables  Write Latency  Read Latency  Partition Size  Cell Count
                           (micros)      (micros)         (bytes)
50%             5.00          60.00        770.00             924
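For reference, a minimal sketch of how per-table numbers like these can be pulled; the keyspace and table names (my_ks, my_table) are placeholders, not the actual schema:

    # Per-table stats, including "Compacted partition maximum bytes":
    nodetool cfstats my_ks.my_table

    # Percentile view of partition size and cell count (the table above):
    nodetool cfhistograms my_ks my_table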
On Fri, Dec 4, 2015 at 11:44 AM, Anuj Wadehra wrote:
> Did you say "longer than gc_grace_seconds"?
> Won't deletes pop back during repair?
>
Unfortunately, you are correct. Since CASSANDRA-4905 [1], such tombstones
will not be propagated.
The actual way to fully repair a node that has been down g
It may just be going over a lot of data. Does the output of 'nodetool cfstats'
show large partitions? (partition maximum bytes). "collecting 1 of 2147483647"
is suspicious. Are your queries using ALLOW FILTERING, or do they have very high
limits? If you try to read 2 billion entries in one query you will have memory
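To make the point about limits concrete, a hedged cqlsh sketch (keyspace, table, and column names below are made up); 2147483647 is Integer.MAX_VALUE:

    # Queries worth checking for: filtering scans and very high limits.
    cqlsh -e "SELECT * FROM my_ks.my_table WHERE some_col = 'x' ALLOW FILTERING;"

    # A bounded query keeps the amount of data collected per request small:
    cqlsh -e "SELECT * FROM my_ks.my_table WHERE pk = 'key1' LIMIT 100;"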
Hi Robert,
Did you say "longer than gc_grace_seconds"?
Won't deletes pop back during repair?
Thanks
Anuj
Sent from Yahoo Mail on Android
From:"Robert Coli"
Date:Thu, 3 Dec, 2015 at 12:21 am
Subject:Re: Want to run repair on a node without it taking traffic
On Wed, Dec 2, 2015 at 8:54 AM, K
Hi Guys!!
I need comments on my understanding of repair -pr. If you are using repair -pr
in your cluster, then the following statements hold true:
1. If a node goes down for a long time and you're not sure when it will return, you
must ensure that a subrange repair for the affected node's range is done (see the
sketch below).
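A rough sketch of what that could look like; the keyspace name and token values are placeholders only, and the real ranges would come from the ring:

    # Primary-range repair on a live node:
    nodetool repair -pr my_keyspace

    # Subrange repair over an explicit token range; get the real start/end
    # tokens from 'nodetool describering my_keyspace' or 'nodetool ring':
    nodetool repair -st -9223372036854775808 -et -3074457345618258603 my_keyspace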
We are running 2.1.x and are currently looking into changing from STCS to
LCS, as well as enabling incremental repairs.
In what order should we do that? Should we enable incremental repairs
first, let it run its course which would mark a lot of tables as repaired,
and those marks would then carry
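For concreteness, a rough sketch of the two operations in question; the keyspace/table names are placeholders and the exact repair flags depend on the 2.1.x minor version:

    # Switch the table's compaction strategy from STCS to LCS:
    cqlsh -e "ALTER TABLE my_ks.my_table
              WITH compaction = {'class': 'LeveledCompactionStrategy'};"

    # Run an incremental, parallel repair on that table:
    nodetool repair -inc -par my_ks my_table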
Thanks for the elaboration. A few more questions...
Is there only a single thread in each client, or are there multiple threads
reading in parallel? In other words, does a read need to complete before the
next read is issued?
What client Cassandra driver are you using? Java?
What does your connection
Dear All,
Recently one of the nodes in our cluster has had a high CPU load of ~100%. It seems
to me there is an infinite loop in SliceQueryFilter.
The log below repeats every 5000 ms (range_request_timeout_in_ms).
TRACE [SharedPool-Worker-11] 2015-12-04 19:25:33,418 SliceQueryFilter.java:269
- collecting 1 of
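One way to dig into where that time is going, sketched with standard tooling (the trace probability value here is arbitrary):

    # See which thread pools on the hot node are busy or backing up:
    nodetool tpstats

    # Sample a small fraction of requests into system_traces, then inspect
    # which queries are collecting so many cells:
    nodetool settraceprobability 0.001
    cqlsh -e "SELECT session_id, duration, request FROM system_traces.sessions LIMIT 20;"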
Thanks for your input, but I think I’ve already answered most of your questions.
How many clients do you have performing reads?
--
On Wed, Dec 2, 2015 at 6:44 PM, Walsh, Stephen <stephen.wa...@aspect.com> wrote:
….
There are 2 applications (1 for each DC) that read and write
2.0.16
I could argue that either way is correct. It’s just disconcerting that the
behavior changed. I spent some time and found and fixed everywhere in my code
where this change could be a problem, and I fixed it in such a way that it
works for both behaviors. I'd hate for this to come back to
If you change stream throughput it won't affect currently running streams
but it should affect new ones.
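As a hedged example (the value here is arbitrary; units are megabits per second, and 0 disables throttling):

    # New streams started after this call will use the new limit:
    nodetool setstreamthroughput 400

    # The same limit can be set permanently in cassandra.yaml via
    # stream_throughput_outbound_megabits_per_sec (default 200).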
all the best,
Sebastián
On Dec 4, 2015 5:39 AM, "Jonathan Ballet" wrote:
> Thanks for your answer Rob,
>
> On 12/03/2015 08:32 PM, Robert Coli wrote:
>
>> On Thu, Dec 3, 2015 at 7:51 AM, Jon
Thanks for your answer Rob,
On 12/03/2015 08:32 PM, Robert Coli wrote:
On Thu, Dec 3, 2015 at 7:51 AM, Jonathan Ballet <jbal...@edgelab.ch> wrote:
I noticed it's not really fast, and my monitoring system shows that
the incoming traffic on this node is exactly 100 Mb/s (12.6 MB/s)
Hey Bryan,
I haven't changed this setting, but it looks like this is the same
setting that can be changed with "nodetool setstreamthroughput"?
It sounds pretty interesting at first glance but, FWIW, the limit was
12.6 MB/s, not 25 MB/s (so effectively 100 Mb/s).
On 12/03/2015 11:40 PM, Bryan