Also, we have recently been observing a lot of repair log entries like these:
>
> INFO [RepairJobTask:3] 2020-08-12 12:07:46,325 SyncTask.java:73 - [repair
> #aa-bbb-c-dd-] Endpoints /2.2.2.2 and /3.3.3.3 have 9146
> range(s) out of sync for table_a
>
>
Could this somehow be related?
Also, we observe a drop of a few GB in load in our Cassandra
cluster every time we restart a particular node.
Could it be because stale data is being removed?
If not, what could the other possible reasons be?
On Tue, Aug 18, 2020 at 7:26 AM Erick Ramirez
wrote:
>
I would start by checking the replication settings on all your keyspaces.
There's a chance that you have keyspaces not replicated to DC3. FWIW it
would have to be an application keyspace (vs system keyspaces) because of
the size. Cheers!
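To expand on that check: one way to inspect the replication settings for every keyspace, and to fix a keyspace that is missing DC3, is sketched below. This assumes cqlsh access on one of the nodes; the keyspace name my_keyspace is illustrative, and the replication factors shown just mirror the DC layout described later in this thread.

```shell
# List the replication strategy and per-DC replication factors
# for all keyspaces (system_schema exists on Cassandra 3.0+):
cqlsh -e "SELECT keyspace_name, replication FROM system_schema.keyspaces;"

# If an application keyspace turns out not to include DC3,
# add it to the replication map (hypothetical keyspace name):
cqlsh -e "ALTER KEYSPACE my_keyspace
  WITH replication = {'class': 'NetworkTopologyStrategy',
                      'DC1': 3, 'DC2': 1, 'DC3': 2};"

# Then stream the existing data to the new DC by running,
# on each DC3 node, a rebuild sourced from an existing DC:
nodetool rebuild -- DC1
```

Note that ALTER KEYSPACE only changes the schema; until `nodetool rebuild` (or a full repair) completes, the DC3 replicas will not actually hold the data.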
Hi,
We are using Cassandra 3.0.13
We have the following datacenters:
- DC1 with 7 Cassandra nodes with RF:3 (2 years old)
- DC2 with 1 Cassandra node with RF:1 (4 years old)
- DC3 with 2 Cassandra nodes with RF:2 (one-month-old)
On DC2 and DC3, each node holds 100% of the data.
Seed nodes while