No, I did not.
On 24 Jun 2015, at 06:05, Jason Wee <peich...@gmail.com> wrote:
on the node 192.168.2.100, did you run repair after its status became UN?
On Wed, Jun 24, 2015 at 2:46 AM, Jean Tremblay <jean.tremb...@zen-innovations.com> wrote:
Dear Alain,
Thank you for your reply.
Dear Alain,
Thank you for your reply.
Ok, yes, I did not drain. The cluster was loaded with tons of records, and no
new records had been added for a few weeks. Each node had a load of about 160 GB as
seen in "nodetool status". I killed the Cassandra daemon and restarted it.
After Cassandra was
Hi Jean,
"I had to reboot a node. I killed the cassandra process on that node". You
should drain the node before killing java (or using any service stop
command). This is not what causes the issue, but it will help you keep
consistency if you use counters, and it makes the reboot faster in any case.
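For reference, a minimal sketch of the clean-restart sequence described above. This assumes a service-managed install; the exact stop/start commands depend on your distribution and how Cassandra was installed, so adapt them to your setup:

```shell
# Flush memtables and stop accepting client/gossip traffic
# before stopping the JVM:
nodetool drain

# Stop and start the service (command varies by install method):
sudo service cassandra stop
sudo service cassandra start

# Verify the node rejoins the ring as UN (Up/Normal) before
# touching the next node:
nodetool status
```

Draining first means the commit log is empty on shutdown, so startup replay is fast and counter writes are not replayed.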
Does anyone know what to do when such an event occurs?
Does anyone know how this could happen?
I would have tried repairing the node with "nodetool repair", but that takes much
too long… I need my cluster to work for our online system. At this point
nothing is working. It’s like the whole cluster i
Hi,
I have a cluster with 5 nodes running Cassandra 2.1.6.
I had to reboot a node. I killed the cassandra process on that node. Rebooted
the machine, and restarted Cassandra.
~/apache-cassandra-DATA/data:321>nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State