On Fri, Nov 29, 2013 at 6:36 PM, Anthony Grasso wrote:
> In this case would it be possible to do the following to replace a seed
> node?
>
With the quoted procedure, you are essentially just "changing the IP
address of a node", which will work as long as you set auto_bootstrap: false
in cassandra.yaml.
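For reference, the relevant bits of cassandra.yaml on the replacement node
would look roughly like this (the addresses are just illustrative):

    auto_bootstrap: false
    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "10.0.1.10,10.0.1.11"

You'd also want the new node's address in the seeds list of the other nodes.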
Hi Robert,
In this case would it be possible to do the following to replace a seed
node?
nodetool disablethrift
nodetool disablegossip
nodetool drain
stop Cassandra
deep copy /var/lib/cassandra/* on old seed node to new seed node
start Cassandra on new seed node
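As a rough shell sketch of the above (the hostname and the service commands
are placeholders, adjust for your install):

    # on the old seed node
    nodetool disablethrift
    nodetool disablegossip
    nodetool drain
    sudo service cassandra stop
    rsync -a /var/lib/cassandra/ new-seed:/var/lib/cassandra/

    # on the new seed node
    sudo service cassandra start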
Regards,
Anthony
A-yup. Got burned by this too some time ago myself. If you do accidentally try
to bootstrap a seed node, the solution is to run repair after adding the new
node but before removing the old one. However, during this time the node will
advertise itself as owning a range, but when queried, it'll return empty results.
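Something like this, on each node, once the new node has joined and before
the old one is removed:

    nodetool repair -pr

(-pr only repairs the node's primary ranges, so it has to be run on every
node to cover the whole ring.)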
On Tue, Nov 26, 2013 at 9:48 AM, Christopher J. Bottaro <cjbott...@academicworks.com> wrote:
> One thing that I didn't mention, and I think may be the culprit after
> doing a lot of mailing list reading, is that when we brought the 4 new
> nodes into the cluster, they had themselves listed in the seeds list.
We ran repair -pr on each node after we realized there was data loss, and we
added the 4 original nodes back into the cluster. I.e., we ran repair on the
8-node cluster consisting of the 4 old and 4 new nodes.
We are using quorum reads and writes.
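For what it's worth, QUORUM is floor(RF/2) + 1, so assuming RF=3 (just an
assumption on my part):

    QUORUM = floor(3/2) + 1 = 2
    reads (2) + writes (2) = 4 > RF (3)

i.e. a quorum read always overlaps a quorum write on at least one replica,
but only if the replicas actually received the data in the first place, which
is exactly what a skipped bootstrap breaks.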
TL;DR you need to run repair in between doing those two things.
Full explanation:
https://issues.apache.org/jira/browse/CASSANDRA-2434
https://issues.apache.org/jira/browse/CASSANDRA-5901
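In other words, the safe sequence looks roughly like this (a sketch; it
assumes you remove the old nodes via decommission rather than just shutting
them down):

    # 1. bring the new nodes into the cluster, one at a time,
    #    with auto_bootstrap left at its default (true)
    # 2. repair before removing anything
    nodetool repair -pr    # on every node in the cluster
    # 3. only then remove the old nodes
    nodetool decommission  # on each old node, one at a time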
Thanks,
-Jeremiah Jordan
On Nov 25, 2013, at 11:00 AM, Christopher J. Bottaro wrote:
> Hello,
>
> We recently experienced (pretty severe) data loss after moving our 4 node
> Cassandra cluster from one EC2 availability zone to another.
That sounds bad! Did you run repair at any stage? Which CL are you reading
with?
/Janne
On 25 Nov 2013, at 19:00, Christopher J. Bottaro wrote:
> Hello,
>
> We recently experienced (pretty severe) data loss after moving our 4 node
> Cassandra cluster from one EC2 availability zone to another.
Hello,
We recently experienced (pretty severe) data loss after moving our 4 node
Cassandra cluster from one EC2 availability zone to another. Our strategy
for doing so was as follows:
- One at a time, bring up new nodes in the new availability zone and
have them join the cluster.
- One at a time, decommission the old nodes in the old availability zone.
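(In hindsight, a cheap sanity check between steps would have been:

    nodetool status    # each new node should show UN (Up/Normal) and own part of the ring

to confirm a node had actually joined and taken data before moving on.)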