I'm not aware of a clean way of doing this other than restarting Cassandra on
the nodes involved in the move. You can determine the nodes/replicas
involved by running nodetool netstats. Cheers!
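The suggestion above can be sketched as a small filter (the netstats output below is fabricated for illustration; on a real cluster you would pipe the output of `nodetool netstats` run on the moving node):

```shell
# Hedged sketch: parse (fabricated) netstats output for the peer addresses
# involved in the streaming sessions. Restarting Cassandra on those peers
# and on the moving node is what aborts the move.
netstats='Mode: MOVING
Streaming to: /10.0.0.12
   /var/lib/cassandra/data/ks/cf-1-Data.db sections=5 progress=0%
Streaming from: /10.0.0.13
   /var/lib/cassandra/data/ks/cf-2-Data.db sections=3 progress=12%'
echo "$netstats" | awk '/Streaming (to|from):/ {print $3}' | tr -d '/'
```

This prints the peer IPs, one per line, which are the nodes to restart.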
Hi all,
Is there a way to stop a nodetool move that is currently in progress?
It is not moving the data between the nodes as expected and I would like to
stop it before it completes.
Thank you
Paul
n the worst case. Any comments? Are there any better
approaches that do not involve moving data into another storage/index?
Thanks
On Wed, Oct 10, 2018 at 1:30 AM Anup Shirolkar <
anup.shirol...@instaclustr.com> wrote:
Hi,
Yes it makes sense to move to 3.11.3
The release has features and bug fixes which should be useful to your
cluster.
However, if you are planning to use GROUP_BY, UDFs, etc.,
please be cautious about the performance implications they may cause if not
used with suitable queries.
I am not aware
Hi list,
We recently upgraded our small cluster to the latest 3.0.17. Everything was
nice and smooth; however, I am wondering if it makes sense to keep moving
forward and upgrade to the latest 3.11.3?
We really need something like GROUP_BY, and UDF/UDA seems limited wrt
our use-case.
Does it ma
Hi Guys,
Sincerely, I can't believe the poor log descriptions Cassandra has. I'm
really annoyed by it. I'll be very grateful if someone can tell me what
I'm doing wrong. These are the system.log entries:
DEBUG [MigrationStage:1] 2017-08-01 19:05:32,316 MigrationManager.java:559
- Gossiping my schema version
try with bootstrap true in that case. Start the seed node first. I think it
should work.
On Tue, Aug 1, 2017 at 4:54 PM, Lucas Alvarez wrote:
I'm sorry, the ip address of this node in the configuration was 10.29.32.141.
num_tokens is set to 256
initial_token is commented.
The server has been just installed.
Thanks for your help
2017-08-01 17:01 GMT-03:00 Nitan Kainth :
> If it is blank, you can use bootstrap: true.
>
> What is num_tok
If it is blank, you can use bootstrap: true.
What is num_tokens and initial token values?
> On Aug 1, 2017, at 2:42 PM, Lucas Alvarez wrote:
10.29.30.2 does not appear to be the IP of the node, if you got "Node
/10.29.32.141 state jump to NORMAL" as the first logged state change
from StorageService. Usually this first entry is the node's local IP
address. Later in the log, you'll see OutboundTcpConnection handshakes
and state change fro
Hi, I'm trying to configure Cassandra as a cluster with two nodes. When
trying to simply start the first node, changing just these parameters:
listen_address: 10.29.30.2
seed_provider: 10.29.30.2
rpc_address: 10.29.30.2
auto_bootstrap: false
I'm getting this message and then Cassandra stops loading:
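For reference, a hedged sketch of how these settings normally appear in cassandra.yaml: seed_provider takes a nested class_name/parameters structure rather than a bare address (the IP below is the poster's; the SimpleSeedProvider layout is the stock shipped format):

```shell
# Hedged sketch: print the cassandra.yaml fragment these settings normally
# form. Note seed_provider's class_name/parameters nesting, not a bare IP.
cfg='listen_address: 10.29.30.2
rpc_address: 10.29.30.2
auto_bootstrap: false
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.29.30.2"'
echo "$cfg"
```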
Hi,
Can anyone help me. I'm trying (and failing) to move my 3 node C* data from my
Production Environment to my Development 3 node cluster.
Here is the fine print...
Oracle Linux 7.3
C* 3.0.11
3 Nodes ((virtual Nodes 256))
1 Keyspace (replication factor 3) Quorum Consistency
1 table
Sna
On Thu, Sep 24, 2015 at 1:29 AM, Rock Zhang wrote:
> on host A, if I run the command "nodetool move -9096422322933500933", what's
> going to happen?
> Move data associated with token "-9096422322933500933" from where to where?
>
When you change the token of a n
Hi All,
I want to manually move some tokens from host A to host B, and I do not fully
understand this command:
* move - Move node on the token ring to a new token*
Say host A has token (I got it with nodetool info -T):
Token : -9096422322933500933
Token
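A hedged outline of the commands involved (hostA and the token are from the question; this assumes a reachable cluster, so treat it as a command sketch rather than something to paste blindly). Note that `move` repositions the node itself on the ring: you cannot hand a single token from A to B directly; data in the ranges A gives up streams to whichever nodes now own them.

```shell
# Command outline for a manual token move (not runnable without a cluster).
nodetool -h hostA info -T                     # show hostA's current token(s)
nodetool -h hostA move -9096422322933500933   # put hostA at this ring position
nodetool -h hostA netstats                    # watch ranges stream to/from the new owners
nodetool -h hostA cleanup                     # afterwards, drop data hostA no longer owns
```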
> will you add this lent node into the 3N to form a cluster? but really,
> if you are just getting started, you could use this one node for your learning by
won’t be
huge, probably about 30G, but it is data that I definitely need to keep
(for historical analysis, audit etc).
Thanks!
Matthew
*From:* Jason Wee [mailto:peich...@gmail.com]
*Sent:* 26 May 2015 14:38
*To:* user@cassandra.apache.org
*Subject:* Re: Start with single node, move to 3
Hi gurus,
We have ordered some hardware for a 3-node cluster, but its ETA is 6 to 8
weeks. In the meantime, I have been lent a single server that I can use. I
am wondering what the best way is to set up my single node (SN), so I can
then move to the 3-node cluster (3N) when the hardware arrives
On Tue, Dec 23, 2014 at 12:29 AM, Jiri Horky wrote:
> just a follow up. We've seen this behavior multiple times now. It seems
> that the receiving node loses connectivity to the cluster and thus
> thinks that it is the sole online node, whereas the rest of the cluster
> thinks that it is the only
Hi list,
we added a new node to an existing 8-node cluster with C* 1.2.9 without
vnodes, and because we are almost totally out of space, we are shuffling
the tokens of one node after another (not in parallel). During one of these
move operations, the receiving node died and thus the streaming failed
tion, but... it’s best to
>> first step back and look at the big picture of what the data actually looks
>> like as well as how you want to query it.
>>
>> -- Jack Krupansky
>>
>> *From:* Les Hartzman
>> *Sent:* Friday, September 19, 2014 5:46 PM
>&
@cassandra.apache.org
Subject: Help with approach to remove RDBMS schema from code to move to C*?
My company is using an RDBMS for storing time-series data. This application
was developed before Cassandra and NoSQL. I'd like to move to C*, but ...
The application supports data coming from multiple models of devices.
Because there is enough variability in the data, the main table to hol
Hi Rob,
THX for your response and link to the issue.
The move did complete after a restart!
Cheers,
~Jason
***
From: Robert Coli <rc...@eventbrite.com>
Reply-To: user@cassandra.apache.org
On Wed, Jun 4, 2014 at 2:34 PM, Jason Tyler wrote:
> I wrote 'apparent progress' because it reports “MOVING” and the Pending
> Commands/Responses are changing over time. However, I haven’t seen the
> individual .db files progress go above 0%.
>
Your move is hung. Restart
4, 2014 at 2:34 PM
To: user@cassandra.apache.org
Cc: Francois Richard <frich...@yahoo-inc.com>
Subject: nodetool move seems slow
Hello,
We have a 5-node cluster running cassandra 1.2.16, w
rack1 Up Normal 2.86 TB 80.00%
5534023222112865484
10.198.xx.xx5 rack1 Up Moving 2.32 TB 66.77%
6783174585269344219
The first three nodes (.xx1 - .xx3 above) were at the desired tokens, so I
issued a move on .xx4:
nodetool
On Wed, Apr 16, 2014 at 4:57 AM, Oleg Dulin wrote:
> I need to rebalance my cluster. I am sure this question has been asked
> before -- will 1.2 continue to serve reads and writes correctly while move
> is in progress ?
>
Yes, but "move" is subject to CASSANDRA-2434 unti
I have recently tested this scenario under a couple versions of Cassandra and
have been able to write and read to/from the cluster while performing a move.
I performed these tests utilizing an RF=2 on a three node cluster while
performing quorum reads and received no errors due to unavailable
On 16 April 2014 05:08, Jonathan Lacefield wrote:
Assuming you have enough nodes not undergoing "move" to meet your CL
requirements, then yes, your cluster will still accept reads and writes.
However, it's always good to test this before doing it in production to
ensure your cluster and app will function as designed.
Jonathan Lace
I need to rebalance my cluster. I am sure this question has been asked
before -- will 1.2 continue to serve reads and writes correctly while
move is in progress ?
Need this for my sanity.
--
Regards,
Oleg Dulin
http://www.olegdulin.com
On Fri, Feb 7, 2014 at 2:50 PM, Vasileios Vlachos <
vasileiosvlac...@gmail.com> wrote:
Thanks for the link Rob,
So basically all is down to which keys the node will end up being
responsible for. It could end up holding 100% of the data (that is
including replicas if I read it right). Looks like ensuring that a node is
capable of holding 100% of your data is necessary...
Bill
On F
On Fri, Feb 7, 2014 at 5:18 AM, Vasileios Vlachos <
vasileiosvlac...@gmail.com> wrote:
> Say you have 6 nodes with 300G each. So you decommission N1 and you bring
> it back in with Vnodes. Is that going to stream back 90%+ of the 300Gx6, or
> it eventually will hold the 90%+ of all the data stored
Thanks for your input.
Yes, you can mix Vnode-enabled and Vnode-disabled nodes. What you described
is exactly what happened. We had a node which was responsible for 90%+ of
the load. What is the actual result of this though?
Say you have 6 nodes with 300G each. So you decommission N1 and you bring
@Bill
Another DC for this migration is the least impacting way to do it. You set
up a new cluster and switch to it when it's ready. No performance or downtime
issues.
Decommissioning a node is quite a heavy operation since it will give part
of its data to all the remaining nodes, increasing network,
My understanding is you can't mix vnodes and regular nodes in the same DC.
Is that correct?
On Thu, Feb 6, 2014 at 2:16 PM, Vasileios Vlachos <
vasileiosvlac...@gmail.com> wrote:
Hello,
My question is why would you need another DC to migrate to Vnodes? How
about decommissioning each node in turn, changing the cassandra.yaml
accordingly, deleting the data, and bringing the node back into the cluster
to let it bootstrap from the others?
We did that recently with our demo cluster.
Glad it helps.
Good luck with this.
Cheers,
Alain
2014-02-06 17:30 GMT+01:00 Katriel Traum :
Thank you Alain! That was exactly what I was looking for. I was worried I'd
have to do a rolling restart to change the snitch.
Katriel
On Thu, Feb 6, 2014 at 1:10 PM, Alain RODRIGUEZ wrote:
> Hi, we did this exact same operation here too, with no issue.
>
> Contrary to Paulo we did not modify
Hi, we did this exact same operation here too, with no issue.
Contrary to Paulo we did not modify our snitch.
We simply added a "dc_suffix" property in the
cassandra-rackdc.properties conf file for nodes in the new cluster:
# Add a suffix to a datacenter name. Used by the Ec2Snitch and
Ec2Mu
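The property mentioned above looks like this in cassandra-rackdc.properties (the `_v2` suffix is a made-up example; with the EC2 snitches the suffix is appended to the region-derived datacenter name, e.g. us-east would become us-east_v2):

```shell
# Hedged sketch: print the cassandra-rackdc.properties fragment.
props='# Add a suffix to a datacenter name. Used by the Ec2Snitch and Ec2MultiRegionSnitch
dc_suffix=_v2'
echo "$props"
```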
Thank you Rob this is very helpful. I'll keep you posted on any progress.
Are others running somewhat large nodes on CentOS 6.4 or similar? Using Java
7? We are also hosted through SoftLayer. Any help is much appreciated.
In general I think Cassandra meets our needs but this is a blocker fo
On Wed, Feb 5, 2014 at 11:22 AM, Keith Wright wrote:
> Also there is one more option which is we could upgrade to 2.0 in the
> hopes that our issue is fixed as part of the streaming overhaul. But
> seeing as this is a production cluster and 2.0 does not yet appear
> production ready, that makes
On Wed, Feb 5, 2014 at 11:18 AM, Keith Wright wrote:
> Hi Rob, thanks for the response! Interestingly if we run a repair we
> don't see the bootstrap issue so I am considering doing the empty node
> repair methodology.
>
Weird. Bootstrap should not be more fragile than repair.
>
>- Update
To: user@cassandra.apache.org
Cc: Don Jackson <djack...@nanigans.com>, Dave Carroll <dcarr...@nanigans.com>
Subject: Re: Move to smaller nodes
http://www.palominodb.com/blog/2012/09/25/bulk-loading-options-cassandra
On Wed, Feb 5, 2014 at 11:00 AM, Keith Wright wrote:
Hi all,
Earlier today I emailed about issues we’re having bootstrapping nodes into
our existing cluster. One theory we have is that our nodes are simply too
large and are considering moving to more, smaller nodes. However, because we
cannot bootstrap it makes it difficult. As I see it, w
We had a similar situation and what we did was first migrate the 1.1
cluster to GossipingPropertyFileSnitch, making sure that for each node we
specified the correct availability zone as the rack in
the cassandra-rackdc.properties. In this way,
the GossipingPropertyFileSnitch is equivalent to the EC
Hello list.
I'm upgrading a 1.1 cassandra cluster to 1.2(.13).
I've read here and in other places that the best way to migrate to vnodes
is to add a new DC, with the same amount of nodes, and run rebuild on each
of them.
However, I'm faced with the fact that I'm using EC2MultiRegion snitch,
which
On Thu, Nov 7, 2013 at 3:58 PM, Daning Wang wrote:
> How to move a token to another node on 1.2.x? I have tried move command,
>
...
> We don't want to use cassandra-shuffle, because it puts too much load on
> the server. We just want to move some tokens.
>
driftx on #cass
How to move a token to another node on 1.2.x? I have tried move command,
[cassy@dsat103.e1a ~]$ nodetool move 168755834953206242653616795390304335559
Exception in thread "main" java.io.IOException: target token
168755834953206242653616795390304335559 is already owned by anothe
The restart worked.
Thanks, Rob!
After the restart I ran 'nodetool move' again, used 'nodetool netstats | grep
-v "0%"' to verify that data was actively streaming, and the move completed
successfully.
-Ike
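The monitoring trick described above can be sketched like this (the netstats dump is fabricated for illustration; on a real cluster you would pipe `nodetool netstats` instead of the sample variable):

```shell
# Hedged sketch: filter a (fabricated) netstats dump down to files that are
# actively streaming, i.e. progress above 0%.
sample='Mode: MOVING
/var/lib/cassandra/data/ks/cf-1-Data.db 0%
/var/lib/cassandra/data/ks/cf-2-Data.db 45%'
echo "$sample" | grep -v " 0%"
```

Only the header and the actively-streaming file survive the filter, which makes a hung move (everything stuck at 0%) easy to spot.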
On Sep 10, 2013, at 11:04 AM, Ike Walker wrote:
>
On Mon, Sep 9, 2013 at 7:08 PM, Ike Walker wrote:
> I've been using nodetool move to rebalance my cluster. Most of the moves
> take under an hour, or a few hours at most. The current move has taken 4+
> days so I'm afraid it will never complete. What's the best way to c
I've been using nodetool move to rebalance my cluster. Most of the moves take
under an hour, or a few hours at most. The current move has taken 4+ days so
I'm afraid it will never complete. What's the best way to cancel it and try
again?
I'm running a cluster of 12 nodes at
>
> if you've got a non-vnode cluster and are trying to convert, you are
> likely going to at least want, if not have to, run shuffle
Fair enough. Running shuffle after upgrading to using vnodes is nearly
mandatory, or else you'll run into trouble when adding more nodes (see this
Jira ticket
Eric,
Unfortunately if you've got a non-vnode cluster and are trying to convert,
you are likely going to at least want, if not have to, run shuffle. It
isn't a pleasant situation when you run into that because in order for the
shuffle to execute safely and successfully you need to have essentiall
>
> vnodes currently do not bring any noticeable benefit to outweigh the trouble
The main advantage of vnodes is that it lets you have flexibility with
respect to adding and removing nodes from your cluster without having to
rebalance your cluster (issuing a lot of moves). A shuffle is a lot of
m
My understanding is that it is not possible to change the number of
tokens after the node has been initialized.
that was my conclusion too. vnodes currently do not bring any
noticeable benefit to outweigh the trouble. Shuffle is very slow in a large
cluster. Recovery is faster with vnodes but I h
> Pretty sure you can put the list in the yaml file too.
Yup, sorry.
initial_tokens can take a comma separated value
Cheers
-
Aaron Morton
Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 15/07/2013, at 9:44 AM, Eric Stevens wrote:
> My understand
My understanding is that it is not possible to change the number of tokens
after the node has been initialized. To do so you would first need to
decommission the node, then start it clean with the appropriate num_tokens
in the yaml.
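The decommission-and-rejoin route described above, sketched as an outline (service names and data paths vary by install; this assumes the default package layout and is not meant to be pasted blindly):

```shell
# Command outline for changing num_tokens on an existing node.
nodetool decommission                  # stream this node's data to the others
sudo service cassandra stop
sudo rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* \
            /var/lib/cassandra/saved_caches/*
# edit cassandra.yaml: set num_tokens (e.g. 256) and leave initial_token unset
sudo service cassandra start           # node bootstraps back in with vnodes
```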
On Fri, Jul 12, 2013 at 9:17 PM, Radim Kolar wrote:
> its pos
bulkloader . You could copy the
>> > SSTables from every node in the source system and bulk load them into the
>> > dest system. That process will ensure rows are sent to nodes that are
>> > replicas.
>> >
>> > Cheers
>> >
>> > -
Is it possible to change num_tokens on a node with data?
I changed it and restarted the node but it still shows the same amount in
nodetool status.
es that are replicas.
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 9/07/2013, at 12:45 PM, Baskar Duraikannu
wrote:
> We have two clusters used by two different groups with vnodes enabled. Now
> there
We have two clusters used by two different groups with vnodes enabled. Now
there is a need to move some of the keyspaces from cluster 1 to cluster 2.
Can I just copy data files for the required keyspaces, create schema manually
and run repair?
Anything else required? Please help.
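The bulk-load route suggested elsewhere in this thread can be sketched as follows (keyspace, table, and host names are placeholders; sstableloader re-routes each row to its correct replicas, so the two clusters' token layouts don't need to match):

```shell
# Command outline (not runnable without both clusters): run against the
# SSTable directory of each source node, pointing -d at a destination node.
sstableloader -d dest-node1.example.com \
    /var/lib/cassandra/data/mykeyspace/mytable
```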
--
Thanks
Zhu
<wz1...@yahoo.com>
Date: Tuesday, April 23, 2013 12:53 PM
To: user@cassandra.apache.org
Subject: Re: move data from Cassandra 1.1.6 to 1.2.4
Hi Dean,
It's a bit different case for us. We
hanks.
-Wei
From: "Hiller, Dean"
To: "user@cassandra.apache.org" ; Wei Zhu
Sent: Tuesday, April 23, 2013 11:17 AM
Subject: Re: move data from Cassandra 1.1.6 to 1.2.4
We went from 1.1.4 to 1.2.2 and in QA rolling restart failed but in pr
y, April 23, 2013 12:11 PM
To: Cassandra usergroup <user@cassandra.apache.org>
Subject: move data from Cassandra 1.1.6 to 1.2.4
Hi,
We are trying to upgrade from 1.1.6 to 1.2.4, it's not really a live upgrade.
We are going to retire the old hardware and bring in a set of new hardware for
1.2.4.
For old cluster, we have 5 nodes with RF = 3, total of 1TB data.
For new cluster, we will have 10 nodes with RF = 3. We will use
+1
Also where can I learn more about pyhtondra?
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 31/01/2013, at 8:09 AM, Rob Coli wrote:
> On Wed, Jan 30, 2013 at 7:21 AM, Edward Capriolo
> wrote:
>> My suggestion:
On Wed, Jan 30, 2013 at 7:21 AM, Edward Capriolo wrote:
> My suggestion: At minimum we should re-route these questions to client-dev
> or simply say, "If it is not part of core Cassandra, you are looking in the
> wrong place for support"
+1, I find myself scanning past all those questions in orde
I totally agree.
-Vivek
On Wed, Jan 30, 2013 at 8:51 PM, Edward Capriolo wrote:
A good portion of people and traffic on this list is questions about:
1) astyanax
2) cassandra-jdbc
3) cassandra native client
4) pyhtondra / whatever
With the exception of the native transport, which is only halfway part of
Cassandra, none of these other client issues have much to do with cor
wish them change places
nodetool -h d-st-n2 move 1
nodetool -h d-st-n2 cleanup
Here I expect the second node to have a load of almost 0, but this does not
happen
10.111.1.141datacenter1 rack1 Up Normal 195.53 MB 100.00%
0
Look in the logs for errors or warnings. Also let us know what version you are
using.
Am guessing that node 2 still thought that node 1 was in the cluster when you
did the move. Which should(?) have errored.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http
Let me elaborate a bit.
two node cluster
node1 has token 0
node2 has token 85070591730234615865843651857942052864
node1 goes down perminently.
do a nodetool move 0 on node2.
Monitor with ring... it is in Moving state forever, it seems.
From: Poziombka, Wade L
Sent: Tuesday, May 29, 2012 4:29
remove removetoken
-- Original --
From: "Poziombka, Wade L";
Date: 2012-05-30 5:29
To: "user@cassandra.apache.org";
Subject: nodetool move 0 gets stuck in "moving" state forever
If the node with token 0 dies
If the node with token 0 dies and we just want it gone from the cluster, we
would do a nodetool move 0. Then we monitor using nodetool ring and it seems to
be stuck on Moving forever.
Any ideas?
it 33%,33% and
33%, for that I simply used: nodetool -h move
But it gives following error:
Exception in thread "main" java.lang.UnsupportedOperationException: data is
currently moving to this node; unable to leave the ring
at
org.apache.cassandra.service.StorageSe
I was moving around some nodes in my cluster, but when I got to one node there
appeared an error:
"Error during move: The data partitions for node [IP] have not been
determined"
How to solve this problem?
efore
starting the move.
You may also see them after the streaming has completed and the new SSTables
are compacted with the older ones.
What is nodetool ring saying?
The last thing that logs before the streaming starts is "Sleeping {} ms before
start streaming/fetching ranges".
Che
I've got some nodes in a "moving" state in a cluster (the nodes to which
they stream shouldn't overlap), and I'm finding it difficult to determine
if they're actually doing anything related to the move at this point, or if
they're stuck in the state and not act
Hi,
Here is what I do, starting from 4 0.6.13 nodes with RF 2:
- repair all the nodes
- compact all the nodes
- upgrade to 1.0
- scrub all the nodes
- repair all the nodes
- compact all the nodes
- move one node to a new token
For some requests, I don't get the same result before and afte
Running on cassandra 0.8.(6|7), I have issued two moves in the same cluster
at the same time, on two different nodes. There are no writes being issued
to the cluster.
I saw a mailing list post mentioning doing moves one node at a time.
Did I just trash my cluster ?
Thanks
Initial state: 3 nodes, RF=3, version = 0.7.8, some queries are with
CL=QUORUM
1. Add node with with correct token for 4 nodes, repair
2. Move first node to balance 4 nodes, repair
3. Move second node
===> Start getting timeouts, Hector warning: WARNING - Er
> is there a technical problem with running a nodetool move on a node while a
> cleanup is running?
Cleanup is removing data that the node is no longer responsible for while move
is first removing *all* data from the node and then streaming new data to it.
I'd put that in the c
While it would certainly be preferable to not run a cleanup and a move at
the same time on the same node, is there a technical problem with running a
nodetool move on a node while a cleanup is running? Or if it's possible to
gracefully kill a cleanup, so that a move can be run and then cleanup