To: user@cassandra.apache.org
Subject: [EXTERNAL] Bring 2 nodes down
Hi,
I have a cluster of 8 nodes: 4 physical machines with 2 VMs on each physical machine.
RF=3, and we have a read/write QUORUM consistency requirement.
One of the machines needs to be down for an hour or two to fix a local disk.
What is the best way to do that without losing data?
Regards
-- Alaa
Options, in order of desirability:
The "right" way to configure such a setup when creating the cluster would
be to define each physical machine as a "rack", and then use the right
replication/snitch configuration to give you rack awareness, so you
wouldn't have 2 replicas on the same physical machine.
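A minimal sketch of that layout, assuming GossipingPropertyFileSnitch and made-up
DC/rack names (none of these names come from the thread): every VM that lives on the
same physical host gets the same rack value in its cassandra-rackdc.properties.

    # cassandra-rackdc.properties on both VMs of physical machine 1
    dc=DC1
    rack=machine1

    # cassandra-rackdc.properties on both VMs of physical machine 2
    dc=DC1
    rack=machine2

    # ...same pattern for machines 3 and 4. With NetworkTopologyStrategy and RF=3,
    # the 3 replicas land on 3 different racks (physical machines), so taking one
    # machine (2 VMs) down leaves 2 of the 3 replicas up and QUORUM keeps working.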
Thanks all for your thorough explanation.
-- Original message --
From: "Anuj Wadehra"
Sent: 28 July 2017, 0:49
To: "User cassandra.apache.org"; "Peng Xiao" <2535...@qq.com>
Subject: Re: tolerate how many nodes down in the cluster
Hi Peng,
Racks can be logical (as defined with the RAC attribute in Cassandra configuration
files) or physical (racks in server rooms).
In my view, for leveraging racks in your case, it's important to understand the
implications of the following decisions:
1. Number of distinct logical RACs defined in Cassandra ...
Note that if you use more racks than RF you lose some of the operational
benefit, e.g. you'll still only be able to take out one rack at a time
(especially if using vnodes), despite the fact that you have more racks
than RF. As Jeff said this may be desirable, but really it comes down to
what your ...
On 2017-07-26 19:38 (-0700), "Peng Xiao" <2535...@qq.com> wrote:
> Kurt/All,
> Why should the # of racks be equal to RF?
> For example, we have 2 DCs, each with 6 machines and RF=3, and each machine is
> virtualized into 8 VMs.
> Can we set 6 racks with RF=3? I mean one machine per RAC, to avoid hardware ...
From: <2535...@qq.com>
Sent: 26 July 2017 (Wednesday), 7:31 PM
To: "user"
Cc: "anujw_2...@yahoo.co.in"
Subject: Re: tolerate how many nodes down in the cluster
One more question: why should the # of racks be equal to RF?
For example, we have 4 machines, each virtualized into ...
Sent: 26 July 2017 (Wednesday), 10:32 AM
To: "user"
Cc: "anujw_2...@yahoo.co.in"
Subject: Re: tolerate how many nodes down in the cluster
Thanks for the reminder, we will set up a new DC as suggested.
-- Original message --
From: "kurt greaves"
Sent: 26 July 2017 (Wednesday), 10:30 AM
To: "User"
Cc: "anujw_2...@yahoo.co.in"
Subject: Re: tolerate how many nodes down in the cluster
Keep in mind that you shouldn't just enable multiple racks on an existing
cluster (this will lead to massive inconsistencies). The best method is to
migrate to a new DC as Brooke mentioned.
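A rough sketch of such a migration, with a made-up keyspace name and DC names (the
thread doesn't give them), assuming the new nodes have already joined with the new
DC/rack settings and auto_bootstrap disabled:

    -- add the new DC to the keyspace's replication
    ALTER KEYSPACE my_ks
      WITH replication = {'class': 'NetworkTopologyStrategy',
                          'DC1': 3, 'DC2': 3};

    -- then, on each node in the new DC, stream the existing data from the old DC:
    --   nodetool rebuild DC1
    -- once the new DC is fully built and repaired, switch clients over
    -- (e.g. LOCAL_QUORUM against DC2) and retire the old DC if desired.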
ng Xiao"<2535...@qq.com>;
: Re: ?? tolerate how many nodes down in the cluster
I've never really understood why Datastax recommends against racks. In those
docs they make it out to be much more difficult than it actually is to
configure and manage racks.
The important thing to ke
I've never really understood why Datastax recommends against racks. In
those docs they make it out to be much more difficult than it actually is
to configure and manage racks.
The important thing to keep in mind when using racks is that your # of
racks should be equal to your RF. If you have keyspaces ...
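As an illustration of the racks == RF case (keyspace and DC names are made up, not
from the thread):

    CREATE KEYSPACE racks_demo
      WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};

    -- with exactly 3 racks in DC1, NetworkTopologyStrategy puts one of the 3
    -- replicas of every partition in each rack, so a whole rack can be taken
    -- offline while QUORUM (2 of 3) remains achievable.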
> ... to ensure that racks will be distributing data
> correctly and evenly. At times when clusters need immediate expansion,
> racks should be the last things to worry about.
-- Original message --
From: "Bhuvan Rawal"
Sent: 24 July 2017 (Monday), 7:17 PM
To: "user"
Subject: Re: tolerate how many nodes down in the cluster
Hi Peng,
This really depends on how you have configured your topology. Say you have
segregated your DC into 3 racks with 10 servers each. With RF of 3 you can ...
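To make the arithmetic behind that concrete (my own back-of-the-envelope figures,
not part of Bhuvan's mail):

    RF = 3, so QUORUM needs floor(3/2) + 1 = 2 replicas
    3 racks x 10 servers, rack-aware placement -> 1 replica of each partition per rack
    one whole rack down (10 nodes)             -> 2 of 3 replicas still up everywhere,
                                                  so QUORUM reads/writes still succeed
    no rack awareness                          -> 3 unlucky nodes down can already hold
                                                  all 3 replicas of some partition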
Hi,
Suppose we have a 30-node cluster in one DC with RF=3.
How many nodes can be down? Can we tolerate 10 nodes down?
It seems that we are not able to avoid having all 3 replicas of some data
within those 10 nodes, so can we then only tolerate 1 node down even though
we have 30 nodes?
Could anyone please advise?
INFO [main] 2016-07-25 21:58:47,030 StorageService.java:1902 - Node
ip-10-4-43-66.ec2.internal/10.4.43.66 state jump to NORMAL
INFO [main] 2016-07-25 21:58:47,096 CassandraDaemon.java:644 - Waiting
for gossip to settle before accepting client requests...
... - Updating topology for /10.4.68.222
INFO [GossipStage:1] 2016-07-25 21:58:47,497 Gossiper.java:1028 - Node /
10.4.54.176 has restarted, now UP
INFO [GossipStage:1] 2016-07-25 21:58:47,544 StorageService.java:1902 -
Node /10.4.54.176 state jump to NORMAL
INFO [HANDSHAKE-/10.4.54.176] 2016-07-25 21:58:47,548
OutboundTcpConnection.java:515 - Handshaking version with /10.4.54.176
INFO [GossipStage:1] 2016-07-25 21:58:47,594 TokenMetadata.java:429 -
Updating topology for /10.4.54.176
[GossipTasks:1] 2016-07-25 21:58:47,678 FailureDetector.java:287 -
Not marking nodes down due to local pause of 43226235115 > 50...
INFO [HANDSHAKE-/10.4.43.65] 2016-07-25 21:58:47,679
OutboundTcpConnection.java:515 - Handshaking version with /10.4.43.65
Hello Gabriel,
On Sun, Nov 16, 2014 at 7:25 AM, Gabriel Menegatti wrote:
> I said that load was not a big deal, because OpsCenter shows these loads as
> green, not as yellow or red at all.
>
> Also, our servers have many processors/threads, so I *think* this load is
> not problematic.
I've seen ...
Hi Eric,
Thanks for your reply.
I said that load was not a big deal, because OpsCenter shows these loads as
green, not as yellow or red at all.
Also, our servers have many processors/threads, so I *think* this load is not
problematic.
My assumption is that for some reason the 10 DC2 nodes are ...
> load average on DC1 nodes are around 3-5 and on DC2 around 7-10
Anecdotally I can say that loads in the 7-10 range have been dangerously
high. When we had a cluster running in this range, the cluster was falling
behind on important tasks such as compaction, and we really struggled to
successfully ...
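(Not from the original thread, but two standard ways to see whether a node is
falling behind under load:)

    nodetool tpstats          # growing Pending/Blocked counts (e.g. MutationStage)
                              # mean the node can't keep up with incoming work
    nodetool compactionstats  # a steadily climbing "pending tasks" number means
                              # compaction is falling behind the write rate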
Hello,
We are using Cassandra 2.1.2 in a multi-DC cluster (30 servers on DC1 and
10 on DC2) with a keyspace replication factor of 1 on DC1 and 2 on DC2.
For some reason, when we increase the volume of write requests on DC1 (using
ONE or LOCAL_ONE), the Cassandra java process on DC2 nodes goes down.
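For reference, a keyspace with that replication looks roughly like this (keyspace
name and DC names are placeholders; the post doesn't give them):

    CREATE KEYSPACE example_ks
      WITH replication = {'class': 'NetworkTopologyStrategy',
                          'DC1': 1, 'DC2': 2};

    -- writes at ONE/LOCAL_ONE only wait for a single replica in the coordinator's
    -- DC, but every mutation is still forwarded to the DC2 replicas in the
    -- background, so a write surge on the 30 DC1 nodes lands on the 10 DC2 nodes too.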
On Sat, May 11, 2013 at 6:56 AM, Rodrigo Felix wrote:
> What does the first line of bin/nodetool ring output mean? It has the same
> token as the down node.
No, it doesn't. It is displaying the token of the highest node to
indicate to you that the token range is a ring, and that the first
node is ...
Fix the number of mappers and reducers according to your hardware.
*Thanks & Regards*
∞
Shashwat Shriparv
I'm running YCSB and sometimes, when the load is heavy, some nodes go down.
What does the first line of bin/nodetool ring output mean? It has the same
token as the down node. For each down node, is it expected to show a
line like the first one (with most columns empty)?