You could try...
- delete / move the system data directory
- set the initial_token for each node to the token it held before
- restart and recreate the schema
- run repair and then cleanup (rough commands sketched below)
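A rough sketch of that sequence, assuming a default install with data under
/var/lib/cassandra and nodetool reachable on each host (paths, hosts and
tokens below are placeholders, not taken from your setup):

  # with Cassandra stopped on each node
  mv /var/lib/cassandra/data/system /var/lib/cassandra/data/system.bak

  # in cassandra.yaml on each node, pin the token it held before:
  # initial_token: <previous token for this node>

  # after restarting and recreating the schema
  nodetool -h <host> repair
  nodetool -h <host> cleanup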
It would have been a good idea to drain the nodes; that would have
checkpointed the commit logs and cleared them.
If you do not
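For reference, a drain is just (host is a placeholder):

  nodetool -h <host> drain

which flushes the memtables so the commit log segments no longer need to be
replayed on restart.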
Why don't you just add a new node to the ring and removetoken the bad one?
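Something along these lines, assuming you can reach a live node with nodetool
(host and token below are placeholders):

  nodetool -h <live-host> ring                    # note the dead node's token
  nodetool -h <live-host> removetoken <dead-token>

and then bootstrap the replacement node with its own initial_token set in
cassandra.yaml.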
2011/4/27 maneela a
Hi,
I had a 2-node Cassandra cluster with replication factor 2 and
OrderPreservingPartitioner, but we did not provide InitialToken in
the configuration files. One of the nodes was affected in the recent AWS EBS
outage and had been partitioned from the cluster. However, I continued to allow
all writes