>do you run "nodetool repair" on both base and view regularly?
Yes, we run a full repair on our entire cluster every weekend, which
includes the keyspaces with the base tables and materialized views.
But still, there are a ton of discrepancies between our base tables and
materialized views.
Also, do you th
Cassandra 4.0-beta1 is now available on FreeBSD.
You can find information about the port here:
https://www.freshports.org/databases/cassandra4/
The beta can be installed from an up-to-date ports tree under
databases/cassandra4.
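For anyone not familiar with FreeBSD ports, installing from an up-to-date ports tree usually looks something like this (assuming the tree is checked out under the default /usr/ports location):

```shell
# Build and install Cassandra 4 from the FreeBSD ports tree
cd /usr/ports/databases/cassandra4
make install clean
```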
Best,
Angelo
Hi,
> We are facing data inconsistency issues between base tables and
materialized views.
do you run "nodetool repair" on both base and view regularly?
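For reference, a scheduled repair covering both the base table and its view could look like this (keyspace and table names here are placeholders, not from your schema):

```shell
# Full repair of the keyspace holding the base table and its views
nodetool repair --full my_keyspace

# Or repair the base table and a materialized view individually
nodetool repair --full my_keyspace base_table
nodetool repair --full my_keyspace base_table_by_other_key
```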
> What are all the possible scenarios that we should be watching out for in
a production environment?
Expect more CPU/IO/GC overhead for populating the views.
> C
Hi,
We are using Cassandra 3.0.13
We have the following datacenters:
- DC1 with 7 Cassandra nodes with RF:3
- DC2 with 2 Cassandra nodes with RF:2
- DC3 with 2 Cassandra nodes with RF:2
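For context, a keyspace spanning that topology would typically be defined with NetworkTopologyStrategy matching those replication factors (the keyspace name is illustrative; substitute your actual datacenter names):

```cql
CREATE KEYSPACE my_keyspace
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'DC1': 3,
    'DC2': 2,
    'DC3': 2
  };
```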
We are facing data inconsistency issues between base tables and
materialized views.
The only solution
Thanks. I am making a personal project so stopping K8S isn’t an issue for me
personally.
The reason I asked about stopping K8S is that I could run the `kubectl edit`
command on the cass-operator...yaml file but not on the example..yaml file. As
the configuration is in the example...yaml file, I
Is it possible you don't have the k8s cluster running?
To answer your question: you can edit your k8s cluster configuration with
the new settings, and the cass-operator will apply the changes and then
perform a rolling restart of the pods for the changes to take effect.
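As a concrete sketch (the resource name and namespace below are assumptions based on the cass-operator examples; adjust them to your deployment):

```shell
# Edit the CassandraDatacenter resource that the operator watches;
# on save, cass-operator reconciles the change and rolls the pods
kubectl edit cassandradatacenter dc1 -n cass-operator

# Watch the rolling restart progress
kubectl get pods -n cass-operator -w
```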
Cheers!
>
Hi Team
I was wondering how a Cassandra node is replaced if one of the worker nodes
fails in k8s. My understanding is that since PVCs are remounted to their
volume mounts no matter where the pods are rescheduled (on any node),
replacing a node will not be an issue; only the IP will change.
Regards