I don't like your cunning plan. Don't drop the system_auth and system_distributed
keyspaces; instead, just change them to NetworkTopologyStrategy (NTS) and then do
your replacement for each down node.
If you're actually using auth and are worried about consistency, I believe 3.11
has the feature to be able to exclude nodes during a re[...]
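For example, assuming a single datacenter named DC1 and a target replication
factor of 3 (adjust both to your topology), the change would look something like:

    cqlsh -e "ALTER KEYSPACE system_auth
      WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};"
    cqlsh -e "ALTER KEYSPACE system_distributed
      WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};"

followed by a repair of those keyspaces so the new replicas actually get the data.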
Another question: is there a management tool to run nodetool cleanup one node at
a time (wait until cleanup finishes on one node, then start cleanup on the next
node in the cluster)?
On Sat, 22 Sep 2018 16:02:17 +0330 onmstester onmstester wrote:
> I have a cunning plan (Baldrick wise) to solve this problem [...]
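One way to do this without a dedicated tool is a plain shell loop over SSH,
since nodetool cleanup only returns once cleanup on that node has finished.
A rough sketch (host names and SSH access are assumptions on my side):

    #!/bin/bash
    # Run cleanup strictly one node at a time; each ssh call blocks until
    # nodetool cleanup has finished on that node.
    for host in node1 node2 node3; do   # replace with your node list
        echo "Cleaning up $host ..."
        ssh "$host" nodetool cleanup || { echo "cleanup failed on $host"; exit 1; }
    done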
I have a cunning plan (Baldrick wise) to solve this problem:
* stop client application
* run nodetool flush on all nodes to save memtables to disk
* stop cassandra on all of the nodes
* rename original Cassandra data directory to data-old
* start cassandra on all the nodes to create a fresh cluster includ[...]
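Spelled out as commands, the plan would look roughly like this (service name
and data path are assumptions, adjust to your installation; per the plan, each
step is completed on all nodes before the next step starts):

    nodetool flush                                  # 1. persist memtables to disk
    sudo systemctl stop cassandra                   # 2. stop the node
    sudo mv /var/lib/cassandra/data /var/lib/cassandra/data-old   # 3. set old data aside
    sudo systemctl start cassandra                  # 4. node starts with an empty data dir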
Thanks, I am still thinking about it, but before going deeper, is this still an
issue for you at the moment?
Yes, it is.
Hello
> Also I could delete system_traces which is empty anyway, but there is a
> system_auth and system_distributed keyspace too and they are not empty.
> Could I delete them safely too?
I would say no, not safely, as I am not sure about some of them, but maybe
this would work. Here is what I know [...]
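In case it helps, you can at least see what replication those keyspaces
currently use from cqlsh before touching anything (system_schema is available
on 3.x):

    cqlsh -e "SELECT keyspace_name, replication FROM system_schema.keyspaces
      WHERE keyspace_name IN ('system_auth', 'system_distributed', 'system_traces');"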
Thanks Alain. First, here is more detail about my cluster:
* 10 racks + 3 nodes on each rack
* nodetool status shows 27 nodes UN, and 3 nodes (all in a single rack) DN
* version 3.11.2
Option 1: (Change schema and) use replace method (preferred method)
* Did you try to have the replace going, [...]
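For reference, the replace method mentioned here boils down to starting the
replacement node with the replace flag, e.g. in cassandra-env.sh (the path and
exact form may differ in your packaging; the address is the dead node's):

    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address_first_boot=<dead_node_ip>"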
Hello,
I am sorry it took us (the community) more than a day to answer this rather
critical situation. That being said, my recommendation at this point would be
for you to make sure about the impacts of whatever you try. Working on a broken
cluster, as an emergency, might lead you to a sec[...]
Any idea?
On Sun, 09 Sep 2018 11:23:17 +0430 onmstester onmstester wrote:
> Hi, Cluster Spec: 30 nodes, RF = 2, NetworkTopologyStrategy,
> GossipingPropertyFileSnitch + rack aware. Suddenly I lost all disks of
> cassandra-data on one of my racks; after replacing the disks, [...]
Hi,
Cluster Spec:
* 30 nodes
* RF = 2
* NetworkTopologyStrategy
* GossipingPropertyFileSnitch + rack aware
Suddenly I lost all disks of cassandra-data on one of my racks. After replacing
the disks, I tried to replace the nodes with the same IP using this:
https://blog.alteroot.org/articles/2014-03-12/replace
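For context, with GossipingPropertyFileSnitch each node declares its DC and
rack in cassandra-rackdc.properties, along these lines (the names here are
placeholders; the file path depends on the packaging):

    # /etc/cassandra/cassandra-rackdc.properties
    dc=DC1
    rack=RACK7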
Hmm, I don't think we use join_ring=false or write_survey=true for that node.
I already ran removenode to take the bad node out of the ring, and will try to
capture more debug logs next time.
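For reference, those modes would only be in effect if the node had been started
with the corresponding system properties, e.g.:

    # in cassandra-env.sh (or passed on the startup command line)
    JVM_OPTS="$JVM_OPTS -Dcassandra.join_ring=false"    # start without joining the ring
    JVM_OPTS="$JVM_OPTS -Dcassandra.write_survey=true"  # write survey mode (writes only)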
Thanks.
On Sun, Nov 20, 2016 at 2:31 PM, Paulo Motta wrote:
> Is there any chance the replaced node recently resumed bootstrap [...]
Is there any chance the replaced node recently resumed bootstrap, joined
with join_ring=false or write_survey=true? If so, perhaps this could be
related to CASSANDRA-12935.
Otherwise, gossip tokens being empty is definitely unexpected behavior and
you should probably file another ticket with more d[...]
Paulo, the tokens field for 2401:db00:2130:4091:face:0:13:0 shows "TOKENS:
not present" on all live nodes. That means the tokens are missing, right? What
would cause this?
Thanks.
Dikang.
On Fri, Nov 18, 2016 at 11:15 AM, Paulo Motta wrote:
> What does nodetool gossipinfo show for endpoint /2401:db[...]
What does nodetool gossipinfo show for endpoint
/2401:db00:2130:4091:face:0:13:0? Does it contain the TOKENS attribute? If
it's missing, is it only missing on this node, or on other nodes as well?
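To check, something like this on a live node prints the gossip state for that
endpoint, including whether a TOKENS line is present:

    # look for the endpoint's entry and a TOKENS line underneath it
    nodetool gossipinfo | grep -A 20 '2401:db00:2130:4091:face:0:13:0'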
2016-11-18 17:02 GMT-02:00 Dikang Gu:
> Hi, I have encountered a couple of times that I could not replace a down node [...]
Hi, I have encountered a couple of times that I could not replace a down node
due to this error:
2016-11-17_19:33:58.70075 Exception (java.lang.RuntimeException)
encountered during startup: Could not find tokens for
/2401:db00:2130:4091:face:0:13:0 to replace
2016-11-17_19:33:58.70489 ERROR 19:33:58 [main]: Exce