ding problem during peak load. I think it's fairly likely that your
> 7-node cluster (6 nodes in one DC) is not able to keep up with the peak
> load, and you will need to either scale up for the peak load or tune the
> application to avoid the bursty behaviour.
> On 27/07/2021 16:2
ngs.
>
> 16k read requests dropped in 5 seconds, or over 3k requests per second on
> a single node, is a bit suspicious. Do your read requests tend to be
> bursty?
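As a quick sanity check on the arithmetic quoted above (16,000 dropped reads over a 5-second window on one node):

```shell
# 16,000 dropped reads / 5 s window = dropped reads per second per node
echo $((16000 / 5))   # prints 3200, i.e. "over 3k requests per second"
```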
> On 27/07/2021 15:32, Chahat Bhatia wrote:
>
> Yes, RF=6 for system auth. Sorry my bad.
>
>
> No, w
cassandra.yaml file? (
> read_request_timeout_in_ms, etc.) Are they reasonably long enough for the
> corresponding request type to complete?
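For reference, the per-request-type timeouts live in cassandra.yaml; the settings below are the stock defaults from 3.x/4.0-era configs (the values are illustrative, not tuning recommendations):

```yaml
# cassandra.yaml -- request timeouts (default values, shown for reference)
read_request_timeout_in_ms: 5000      # single-partition reads
range_request_timeout_in_ms: 10000    # range scans
write_request_timeout_in_ms: 2000     # writes
request_timeout_in_ms: 10000          # catch-all for other request types
```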
>
> Since you've only got 7 nodes, I'd also recommend checking the nodetool
> cfstats & nodetool cfhistograms output for the tables
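Those checks look something like the following on each node (the keyspace/table names here are placeholders; on newer Cassandra versions the same commands are named tablestats/tablehistograms):

```shell
# Per-table stats: SSTable count, read/write latency, tombstones per read
nodetool cfstats my_keyspace.my_table

# Read/write latency and partition-size histograms for one table
nodetool cfhistograms my_keyspace my_table

# Also worth a look: dropped-message counters per message type
nodetool tpstats
```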
e it's been the same since.
On Tue, 27 Jul 2021 at 13:53, Chahat Bhatia
wrote:
> Thanks for the prompt response.
>
> *Here is the system_schema.keyspaces entry:*
>
> system_auth | True | {'class':
> 'org.apache.cassandra.locator.NetworkTopologyStrategy', ...
consistency
> level to read the auth tables.
>
> Then, can you please make sure the replication strategy is set correctly
> for the system_auth keyspace? I.e.: ensure the old DC is not present, and
> the new DC has a sufficient number of replicas for fault tolerance.
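A sketch of that check and fix in cqlsh, assuming the new datacenter is named `new_dc` (a placeholder) and you want RF=3 for system_auth there:

```sql
-- Inspect the current replication settings
SELECT keyspace_name, replication
  FROM system_schema.keyspaces
 WHERE keyspace_name = 'system_auth';

-- Drop the old DC from the map by listing only the DCs you want to keep
ALTER KEYSPACE system_auth
  WITH replication = {'class': 'NetworkTopologyStrategy', 'new_dc': 3};
```

After altering the keyspace, running `nodetool repair system_auth` on each node streams the auth data onto the new replicas.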
>
> Fin
Hi Community,
Context: We are running a cluster of 6 nodes in production in AWS with
RF=3.
We recently moved from physical servers to the cloud by adding a new DC and
then removing the old one. All the other applications are working fine;
only this one is affected.
*As we recently started experi