due to
> an initial misconfiguration when our application was first started and
> only used DC1 to create the keyspaces and tables
>
> Steve
>
>
> From: Alain RODRIGUEZ
> Reply-To: "user@cassandra.apache.org"
> Date: Thursday, 14 April 2016 at 12:57
>
>
From: Alain RODRIGUEZ
Reply-To: "user@cassandra.apache.org"
Date: Thursday, 14 April 2016 at 12:57
To: "user@cassandra.apache.org"
Subject: Re: Balancing tokens over 2 datacenter
100% ownership on all nodes isn’t wrong with 3 nodes in each of 2 DCs with
RF=3 in both of those DCs. That’s exactly what you’d expect it to be, and a
perfectly viable production config for many workloads.
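The arithmetic behind that statement can be sketched as follows (a toy model for illustration, not Cassandra code):

```python
# Toy model: effective data ownership per node when a keyspace is
# replicated RF times within each datacenter.
def expected_ownership(rf: int, nodes_in_dc: int) -> float:
    """Fraction of the keyspace each node in the DC holds a replica of."""
    return min(1.0, rf / nodes_in_dc)

# 3 nodes per DC and RF=3 in each DC: every node stores every row.
print(expected_ownership(rf=3, nodes_in_dc=3))  # 1.0 -> 100% ownership
# For contrast, RF=3 spread over 6 nodes in one DC would give 50%.
print(expected_ownership(rf=3, nodes_in_dc=6))  # 0.5
```

So with 3 nodes and RF=3 in each DC, 100% ownership on every node is the expected reading, not a fault.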
>
>
>
> From: Anuj Wadehra
> Reply-To: "user@cassandra.apache.org"
> Date: Wednesday, April 13, 2016 at 6:02 PM
>
From: Anuj Wadehra
Reply-To: "user@cassandra.apache.org"
Date: Wednesday, April 13, 2016 at 6:02 PM
To: "user@cassandra.apache.org"
Subject: Re: Balancing tokens over 2 datacenter
Hi Stephen Walsh,
As per the nodetool output, every node owns 100% of the range. This indicates
wrong configuration. It would be good if you verify and share the following
properties:
From: Alain RODRIGUEZ
Reply-To: "user@cassandra.apache.org"
Date: Wednesday, 13 April 2016 at 15:52
To: "user@cassandra.apache.org"
Subject: Re: Balancing tokens over 2 datacenter
Steve,
This cluster looks just great.
Now, due to a misconfiguration
> 100.0%  aef904ba-aaab-47f1-9bdc-cc1e0c676f61  RAC4
>
>
> We ran nodetool repair and cleanup in case the nodes were balanced
> but needed cleaning up – this was not the case :(
>
>
> Steve
>
>
From: Alain RODRIGUEZ
Reply-To: "user@cassandra.apache.org"
Date: Wednesday, 13 April 2016 at 14:48
To: "user@cassandra.apache.org"
Subject: Re: Balancing tokens over 2 datacenter
Hi Steve,
> As such, all keyspaces and tables were created on DC1.
> The effect of this is that all reads are now going to DC1 and ignoring DC2
>
I think this is not exactly true. When tables are created, they are created
in a specific keyspace, no matter which node you send the schema change to.
This could be because of the way you have configured the load balancing
policy; have a look at the links below for configuring it:
https://datastax.github.io/python-driver/api/cassandra/policies.html
http://stackoverflow.com/questions/22813045/ability-to-write-to-a-particular-cassandra-node
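For example, with the DataStax Python driver you can pin an application to its local DC roughly like this (contact points and the DC name are illustrative, not taken from the thread; this needs a running cluster to actually connect):

```python
from cassandra.cluster import Cluster
from cassandra.policies import DCAwareRoundRobinPolicy, TokenAwarePolicy

# Keep this application's traffic in its own DC; TokenAwarePolicy adds
# replica-aware routing on top of the DC-aware round-robin.
cluster = Cluster(
    contact_points=["10.0.2.1", "10.0.2.2", "10.0.2.3"],  # DC2 nodes (illustrative)
    load_balancing_policy=TokenAwarePolicy(
        DCAwareRoundRobinPolicy(local_dc="DC2")
    ),
)
session = cluster.connect()
```

With each application's local_dc set to its own datacenter and LOCAL_* consistency levels, reads stay in the local DC instead of all landing on DC1.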
Regards,
Bhu
Hi there,
So we have 2 datacenters with 3 nodes each.
Replication factor is 3 per DC (so each node has all data).
We have an application in each DC that writes to that Cassandra DC.
Now, due to a misconfiguration in our application, we saw that our
applications in both DCs were pointing to DC1.
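For reference, a keyspace replicated three ways in each of the two DCs, as described above, would be defined along these lines (the keyspace name is hypothetical; the DC names must match those in the snitch/topology configuration):

```sql
-- Sketch: RF=3 in each of the two datacenters.
CREATE KEYSPACE IF NOT EXISTS app_data
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'DC1': 3,
    'DC2': 3
  };
```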