As per my understanding, blocked requests do not get rejected immediately;
they wait for a queued request to complete. A request times out only when
the time since its creation exceeds the configured request timeout.
You may try to tune the pool by increasing
max_queued_native_transport_requests.
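
To see whether requests are actually being blocked (and whether the tuning
helps), you can watch the Native-Transport-Requests pool in nodetool
tpstats. Below is a minimal Python sketch of that check, assuming the 2.x
tpstats column layout; adjust the field positions for your version:

import subprocess

def ntr_blocked_counts():
    """Return (currently_blocked, all_time_blocked) for the
    Native-Transport-Requests pool, parsed from `nodetool tpstats`."""
    out = subprocess.check_output(["nodetool", "tpstats"], text=True)
    for line in out.splitlines():
        if line.startswith("Native-Transport-Requests"):
            # Columns: Pool Name, Active, Pending, Completed, Blocked, All time blocked
            fields = line.split()
            return int(fields[-2]), int(fields[-1])
    raise RuntimeError("Native-Transport-Requests pool not found in tpstats output")

if __name__ == "__main__":
    blocked, all_time = ntr_blocked_counts()
    print("blocked now: %d, blocked since startup: %d" % (blocked, all_time))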
The information below may be helpful:
1. In system.log, grep for the patterns below (see the sketch after this
list):
RepairSession.java (line 244) [repair #2e7009b0-c03d-11e4-9012-99a64119c9d8] new session:
RepairSession.java (line 282) [repair #2e7009b0-c03d-11e4-9012-99a64119c9d8] session completed successfully
2. In table you
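
As a rough companion to the grep in item 1, here is a small Python sketch
that pairs "new session" / "session completed successfully" lines by repair
id, to flag sessions that started but never finished. The regex is an
assumption about the exact log layout, so adapt it to your system.log:

import re

SESSION_RE = re.compile(r"\[repair #([0-9a-f-]+)\] (new session|session completed successfully)")

def unfinished_repairs(log_path="system.log"):
    """Return repair ids that logged 'new session' but never completed."""
    started = {}
    with open(log_path) as f:
        for line in f:
            m = SESSION_RE.search(line)
            if not m:
                continue
            repair_id, event = m.groups()
            if event == "new session":
                started[repair_id] = True
            else:
                started.pop(repair_id, None)
    return list(started)

print(unfinished_repairs())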
We are trying to detect a scenario where some of our smaller clusters go
un-repaired for extended periods of time, mostly due to defects in
deployment pipelines or human error.
We would like to automate a check for clusters with nodes that go
un-repaired for more than 7 days, to shoot out an exc
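
A sketch of one way to automate that check, assuming Cassandra 2.2+ (where
repair outcomes land in system_distributed.repair_history) and the DataStax
Python driver; the contact point and table list below are placeholders:

from datetime import datetime, timedelta
from cassandra.cluster import Cluster

STALE_AFTER = timedelta(days=7)
TABLES = [("my_keyspace", "my_table")]  # placeholder: the tables you expect to be repaired

def stale_tables(contact_points=("127.0.0.1",)):
    cluster = Cluster(list(contact_points))
    session = cluster.connect()
    stale = []
    for ks, cf in TABLES:
        # repair_history stores one row per repaired range; scan the most
        # recent entries for a SUCCESS (the LIMIT is a heuristic).
        rows = session.execute(
            "SELECT finished_at, status FROM system_distributed.repair_history "
            "WHERE keyspace_name=%s AND columnfamily_name=%s ORDER BY id DESC LIMIT 50",
            (ks, cf))
        last_success = next((r.finished_at for r in rows if r.status == "SUCCESS"), None)
        if last_success is None or datetime.utcnow() - last_success > STALE_AFTER:
            stale.append((ks, cf, last_success))
    cluster.shutdown()
    return stale

print(stale_tables())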
Hi,
I'm using Cassandra 2.2.8 with the default NTR queue configuration
(max_queued_native_transport_requests = 128, native_transport_max_threads =
128), and from the metrics I'm seeing some native transport requests being
blocked.
I'm trying to understand what happens to the blocked native tr
Hello all,
I've been doing more analysis and I have a few questions:
1. We observed that most of the requests are blocked on the NTR queue. I
increased the queue size from 128 (default) to 1024, and this time the system
does recover automatically (latencies go back to normal) without removing node
Hi everyone,
I was finally able to sort out my problem in an "interesting" manner that I
think is worth sharing on the list!
What I did was the following: on each node, I stopped Cassandra, completely
dropped the data files of the column family, started Cassandra again, and
issued a repair for this
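
For anyone wanting to script the same procedure, here is a rough Python
sketch of the per-node steps. The service command, data directory, and
keyspace/table names are assumptions for illustration, and dropping data
files is only safe when the data is fully replicated on other nodes:

import glob
import shutil
import subprocess

KEYSPACE, TABLE = "my_keyspace", "my_table"  # placeholders
DATA_DIR = "/var/lib/cassandra/data"         # assumed default data directory

def rebuild_table_from_replicas():
    """Stop the node, drop the table's on-disk data, restart, then repair.
    Run on one node at a time so replicas stay available."""
    subprocess.check_call(["service", "cassandra", "stop"])
    # Table directories are named <table>-<table_id>, hence the glob.
    for path in glob.glob("%s/%s/%s-*" % (DATA_DIR, KEYSPACE, TABLE)):
        shutil.rmtree(path)
    subprocess.check_call(["service", "cassandra", "start"])
    # In practice, wait for the node to come fully up before repairing.
    # The repair streams the dropped data back from the other replicas.
    subprocess.check_call(["nodetool", "repair", KEYSPACE, TABLE])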