> Designate one node to use for all future schema changes. Don't do the
> failover automatically, as this can lead to race conditions when the
> node is flapping up and down.
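>
> To check that the cluster has settled on a single schema version after
> a change, something like this should do (a sketch; the output format
> varies a little between versions):
>
>     # Every node should report the same schema version UUID.
>     nodetool describecluster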
>
> On 25/01/2022 15:02, Amandeep Srivastava wrote:
>
> Hi,
>
> I tried looking for the Id.

We have one Id mapped in the schema table and another Id that is
present in /data/ks of all 18 nodes. Is there a way to make the schema
table Id point to the correct one? (Would a repair help here, since all
nodes have the same data?)
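
For reference, this is roughly how I'm comparing the two Ids (a sketch -
paths follow our layout, and 'ks' stands in for the real keyspace name):

    # Id recorded in the schema table:
    cqlsh -e "SELECT id FROM system_schema.tables
              WHERE keyspace_name='ks' AND table_name='system_properties_lock';"

    # Id embedded in the data directory name on each node:
    ls -d /data/ks/system_properties_lock-*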
Regards,
Aman
On Tue, 25 Jan, 2022, 7:39 pm Amandeep Srivastava, <
amandeep.srivastava1...@gmail.com> wrote:
If any of them is
> empty, you should be able to shut down the node with the empty table,
> delete the table's data folder on that node, and then restart the node;
> repeat this for all nodes with the empty table, and finally run a full
> repair.
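>
> Per affected node that would be roughly (a sketch - assumes systemd and
> a /data data directory; adjust both to your setup):
>
>     sudo systemctl stop cassandra
>     # remove only this table's data folder, e.g.:
>     sudo rm -rf /data/<keyspace>/<table>-<id>
>     sudo systemctl start cassandra
>
> and once every affected node is back up:
>
>     nodetool repair -full <keyspace>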
>
>
> On 25/01/2022, Amandeep Srivastava wrote:
Hi,
We're running an embedded JanusGraph on top of Cassandra. On starting the
graph, it creates certain tables in Cassandra for its operation. We noticed
that there is an Id mismatch for one of the tables, named
system_properties_lock, i.e. the Id fetched from the schema table of
Cassandra (SELECT keyspace_name, table_name, id FROM system_schema.tables)
does not match the Id in the table's data directory name on disk.
> high performance on a big table, then go with the default one and
> increase memory capacity; hardware is cheaper nowadays.
>
> Thanks,
> Jim
>
> On Mon, Aug 2, 2021 at 7:12 PM Amandeep Srivastava <
> amandeep.srivastava1...@gmail.com> wrote:
>
>> Can anyone please help with this?
Is there a manual/configuration-driven way to clear that earlier?
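
(The bluntest workaround I can think of is dropping the Linux page cache
by hand - hence looking for something better. Not Cassandra-specific, and
worth testing before relying on it:)

    # Ask the kernel to drop clean page-cache pages; needs root.
    sync && echo 1 | sudo tee /proc/sys/vm/drop_caches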
Thanks,
Aman
On Thu, 29 Jul, 2021, 6:47 pm Amandeep Srivastava, <
amandeep.srivastava1...@gmail.com> wrote:
> Hi Erick,
>
> Limiting mmap to index only seems to have resolved the issue. The max RAM
> usage remained at 60% throughout (see
> <https://issues.apache.org/jira/browse/CASSANDRA-8464>).
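>
> For anyone who finds this later, the setting in question (a sketch -
> it's an undocumented cassandra.yaml knob, so double-check the exact
> name and values on your version):
>
>     # cassandra.yaml: mmap index files only; read data files through
>     # buffered I/O instead of mmap.
>     disk_access_mode: mmap_index_only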
Also, could you please shed some light on the extended questions in my
earlier email?
Thanks a lot.
Regards,
Aman
On Thu, Jul 29, 2021 at 12:52 PM Amandeep Srivastava <
amandeep.srivastava1...@gmail.com> wrote:
Thanks, Bowen - I don't think that's an issue, but yes, I can try upgrading
to 3.11.5 and limiting the merkle tree size to bring down the memory
utilization.
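
(If I've read it right, the relevant 3.11.5+ knob is
repair_session_space_in_mb - worth double-checking against the release
notes before relying on it:)

    # cassandra.yaml: cap the memory used for merkle trees per repair
    # session (the value here is only an example).
    repair_session_space_in_mb: 128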
Thanks, Erick, let me try that.
Can someone please share documentation on the internal functioning of
full repairs, if any exists?
Hi team,
My Cluster configs: DC1 - 9 nodes, DC2 - 4 nodes
Node configs: 12 cores x 96 GB RAM x 1 TB HDD
Repair params: -full -pr -local
Cassandra version: 3.11.4
I'm running a full repair on the DC2 nodes - one node and one keyspace at a
time. During the repair, RAM usage on all 4 nodes spikes up to 95%.
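
For reference, each run is invoked roughly like this (the keyspace name is
a placeholder):

    # one node and one keyspace at a time; -pr repairs primary ranges
    # only, -local restricts the repair to the local DC (DC2 here)
    nodetool repair -full -pr -local <keyspace_name>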