On Fri, Feb 7, 2014 at 2:50 PM, Vasileios Vlachos <
vasileiosvlac...@gmail.com> wrote:
> So basically it all comes down to which keys the node will end up being
> responsible for. It could end up holding 100% of the data (that is,
> including replicas, if I read it right). Looks like ensuring that a node is
> capable of holding 100% of your data is necessary...
Thanks for the link, Rob.

So basically it all comes down to which keys the node will end up being
responsible for. It could end up holding 100% of the data (that is,
including replicas, if I read it right). Looks like ensuring that a node is
capable of holding 100% of your data is necessary...
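In case it is useful to anyone, the effective ownership (replicas included)
is visible with the standard nodetool commands below; "my_keyspace" is just
a placeholder:

nodetool ring my_keyspace   # effective ownership per node, replicas taken into account
nodetool status             # per-node load and state at a glance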
Bill
On Fri, Feb 7, 2014 at 5:18 AM, Vasileios Vlachos <
vasileiosvlac...@gmail.com> wrote:
> Say you have 6 nodes with 300G each. So you decommission N1 and you bring
> it back in with Vnodes. Is that going to stream back 90%+ of the 300G x 6,
> or will it eventually hold 90%+ of all the data stored...
Thanks for your input.

Yes, you can mix Vnode-enabled and Vnode-disabled nodes. What you described
is exactly what happened: we had a node which was responsible for 90%+ of
the load. What is the actual result of this, though?

Say you have 6 nodes with 300G each. So you decommission N1 and you bring
it back in with Vnodes. Is that going to stream back 90%+ of the 300G x 6,
or will it eventually hold 90%+ of all the data stored...
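My back-of-the-envelope numbers, assuming RF=3 (an assumption on my part,
just to make the example concrete):

6 nodes x 300G on disk            ~= 1.8T in total, replicas included
unique data at RF=3               ~= 1.8T / 3 = ~600G
one node owning ~90% of the ring  -> up to ~0.9 x 600G = ~540G on that node,
                                     since a node stores at most one copy of each row

So "holding 100% of your data" would mean the ~600G of unique data here,
not the full 1.8T.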
@Bill
Another DC is the least disruptive way to do this migration. You set up a
new cluster and switch to it when it's ready, with no performance or
downtime issues.

Decommissioning a node is quite a heavy operation, since it will give part
of its data to all the remaining nodes, increasing network...
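If you do go the rolling decommission route anyway, you can at least watch
how heavy it is while it runs (standard nodetool commands, nothing specific
to your setup):

nodetool decommission     # on the node leaving: streams its ranges to the remaining replicas
nodetool netstats         # on any node: follow the streaming sessions and bytes transferred
nodetool compactionstats  # the receiving nodes also pick up extra compaction work afterwards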
My understanding is that you can't mix vnodes and regular nodes in the same
DC. Is that correct?
On Thu, Feb 6, 2014 at 2:16 PM, Vasileios Vlachos <
vasileiosvlac...@gmail.com> wrote:
> Hello,
>
> My question is: why would you need another DC to migrate to Vnodes? How
> about decommissioning each node in turn, changing the cassandra.yaml
> accordingly, deleting the data, and bringing the node back into the cluster
> to bootstrap from the others?
Hello,
My question is: why would you need another DC to migrate to Vnodes? How
about decommissioning each node in turn, changing the cassandra.yaml
accordingly, deleting the data, and bringing the node back into the cluster
to bootstrap from the others?
We did that recently with our demo cluster.
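In case it is useful, this is roughly the per-node sequence we used (paths
and the service name are just our defaults, and 256 tokens is only the
usual example value, so adjust to your setup):

nodetool decommission         # node streams its ranges away and leaves the ring
sudo service cassandra stop
# in cassandra.yaml: comment out initial_token and set
#   num_tokens: 256
sudo rm -rf /var/lib/cassandra/data/* \
            /var/lib/cassandra/commitlog/* \
            /var/lib/cassandra/saved_caches/*
sudo service cassandra start  # node bootstraps back in, now with vnode tokens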
Glad it helps.
Good luck with this.
Cheers,
Alain
2014-02-06 17:30 GMT+01:00 Katriel Traum:
> Thank you Alain! That was exactly what I was looking for. I was worried
> I'd have to do a rolling restart to change the snitch.
>
> Katriel
>
>
>
> On Thu, Feb 6, 2014 at 1:10 PM, Alain RODRIGUEZ wrote:
Thank you Alain! That was exactly what I was looking for. I was worried I'd
have to do a rolling restart to change the snitch.
Katriel
On Thu, Feb 6, 2014 at 1:10 PM, Alain RODRIGUEZ wrote:
> Hi, we did this exact same operation here too, with no issue.
>
> Contrary to Paulo, we did not modify our snitch.
Hi, we did this exact same operation here too, with no issue.

Contrary to Paulo, we did not modify our snitch. We simply added a
"dc_suffix" property in the cassandra-rackdc.properties conf file for the
nodes in the new cluster:

# Add a suffix to a datacenter name. Used by the Ec2Snitch and
# Ec2MultiRegionSnitch
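On the new nodes that simply looks like the line below (the suffix value
itself is only an example):

dc_suffix=_vnodes

With the Ec2MultiRegionSnitch, a node in us-east-1 should then report its
datacenter as "us-east_vnodes" instead of "us-east", so the new nodes form
their own logical DC.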
We had a similar situation, and what we did was first migrate the 1.1
cluster to GossipingPropertyFileSnitch, making sure that for each node we
specified the correct availability zone as the rack in
the cassandra-rackdc.properties. In this way, the
GossipingPropertyFileSnitch is equivalent to the EC2MultiRegionSnitch...
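In practice that means a node in, say, us-east-1b gets the same values the
EC2 snitch would have derived (the values here are just an example):

# cassandra-rackdc.properties
dc=us-east
rack=1b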
Hello list.
I'm upgrading a 1.1 cassandra cluster to 1.2(.13).
I've read here and in other places that the best way to migrate to vnodes
is to add a new DC with the same number of nodes and run rebuild on each of
them.
However, I'm faced with the fact that I'm using the EC2MultiRegion snitch,
which...
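For reference, my understanding of the new-DC procedure boils down to the
steps below (keyspace and DC names are placeholders):

# once the new vnode DC has joined the cluster, in cqlsh:
#   ALTER KEYSPACE my_ks WITH replication =
#     {'class': 'NetworkTopologyStrategy', 'us-east': 3, 'new_dc': 3};
# then on every node in the new DC, stream the data from the old one:
nodetool rebuild us-east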