RE: [EXTERNAL] Bring 2 nodes down

2017-12-28 Thread Durity, Sean R
@cassandra.apache.org Subject: [EXTERNAL] Bring 2 nodes down Hi, I have a cluster of 8 nodes: 4 physical machines with 2 VMs on each physical machine. RF=3, and we have a read/write QUORUM consistency requirement. One of the machines needs to be down for an hour or two to fix a local disk. What is the best way to do

Re: Bring 2 nodes down

2017-12-14 Thread Jeff Jirsa
Options, in order of desirability: The "right" way to configure such a setup when creating the cluster would be to define each physical machine as a "rack", and then use the right replication/snitch configurations to give you rack awareness, so you wouldn't have 2 replicas on the same physical mac
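
A minimal sketch of the layout Jeff describes, assuming GossipingPropertyFileSnitch; the DC and rack names below are placeholders, and every node needs the same snitch setting:

  # cassandra.yaml, on every node
  endpoint_snitch: GossipingPropertyFileSnitch

  # cassandra-rackdc.properties on the two VMs of physical machine 1
  dc=DC1
  rack=machine1

  # cassandra-rackdc.properties on the two VMs of physical machine 2
  dc=DC1
  rack=machine2
  # ...one rack name per physical machine

With NetworkTopologyStrategy and at least as many racks as the RF, the two VMs sharing a physical machine never hold two replicas of the same partition, so taking that machine down costs at most one replica per partition and QUORUM (2 of 3) keeps working.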

Bring 2 nodes down

2017-12-14 Thread Alaa Zubaidi (PDF)
Hi, I have a cluster of 8 nodes: 4 physical machines with 2 VMs on each physical machine. RF=3, and we have a read/write QUORUM consistency requirement. One of the machines needs to be down for an hour or two to fix a local disk. What is the best way to do that without losing data? Regards -- Alaa --
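
For context (not part of Alaa's message): assuming hinted handoff is enabled and the outage stays within max_hint_window_in_ms (3 hours by default), a clean shutdown of each VM on the affected machine typically looks like this (the service name may differ per install):

  nodetool drain               # flush memtables and stop accepting writes
  sudo service cassandra stop
  # ...fix the disk, start Cassandra again, then verify:
  nodetool status
  # and repair the node if it was down longer than the hint window:
  nodetool repair

Whether QUORUM stays available while both VMs are down depends on whether those two VMs can hold replicas of the same partition, which is exactly what the rack discussion in the replies addresses.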

Re: Re: Re: tolerate how many nodes down in the cluster

2017-07-27 Thread Peng Xiao
Thanks all for your thorough explanation. -- Original Message -- From: "Anuj Wadehra"; Sent: Friday, 28 July 2017, 0:49; To: "User cassandra.apache.org"; "Peng Xiao"<2535...@qq.com>; Subject: Re: Re: tolerat

Re: Re: Re: tolerate how many nodes down in the cluster

2017-07-27 Thread Anuj Wadehra
Hi Peng, Racks can be logical (as defined with the RAC attribute in Cassandra configuration files) or physical (racks in server rooms). In my view, for leveraging racks in your case, it's important to understand the implications of the following decisions: 1. Number of distinct logical RACs defined in C
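
For reference, the RAC attribute Anuj mentions: with the older PropertyFileSnitch it lives in cassandra-topology.properties as DC:RAC pairs, one line per node (the IPs below are illustrative):

  # cassandra-topology.properties (PropertyFileSnitch)
  10.0.1.1=DC1:RAC1
  10.0.1.2=DC1:RAC1
  10.0.2.1=DC1:RAC2
  10.0.2.2=DC1:RAC2
  default=DC1:RAC1

With GossipingPropertyFileSnitch the same information is declared per node in cassandra-rackdc.properties instead.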

Re: Re: Re: tolerate how many nodes down in the cluster

2017-07-26 Thread kurt greaves
Note that if you use more racks than RF you lose some of the operational benefit. e.g: you'll still only be able to take out one rack at a time (especially if using vnodes), despite the fact that you have more racks than RF. As Jeff said this may be desirable, but really it comes down to what your

Re: Re: Re: tolerate how many nodes down in the cluster

2017-07-26 Thread Jeff Jirsa
On 2017-07-26 19:38 (-0700), "Peng Xiao" <2535...@qq.com> wrote: > Kurt/All, > why should the # of racks be equal to RF? > For example, we have 2 DCs, each with 6 machines and RF=3, each machine virtualized > into 8 VMs; > can we set 6 racks with RF=3? I mean one machine per rack to avoid hardwa
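
As a sketch of the replication side of that question (keyspace and DC names are placeholders): the rack count interacts with NetworkTopologyStrategy, which places each DC's replicas on distinct racks where possible:

  cqlsh -e "CREATE KEYSPACE my_ks WITH replication =
    {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};"

With exactly RF racks per DC, every rack holds exactly one replica of each partition, so losing one rack always means losing exactly one replica. With more racks than RF, each rack still holds at most one replica, but only one rack can safely be taken down at a time, which is Kurt's point above.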

Re: Re: tolerate how many nodes down in the cluster

2017-07-26 Thread Peng Xiao
-- Original Message -- From: "Anuj Wadehra"; Sent: Thursday, 27 July 2017, 1:41; To: "Brooke Thorley"; "user@cassandra.apache.org"; Cc: "Peng Xiao"<2535...@qq.com>; Subject: Re: Re: tolerate how many nodes d

Re: Re: tolerate how many nodes down in the cluster

2017-07-26 Thread Anuj Wadehra
ld be the last things to worry about. -- Original Message -- From: "Bhuvan Rawal"; Sent: Monday, 24 July 2017, 7:17 PM; To: "user"; Subject: Re: tolerate how many nodes down in the cluster Hi Peng, This really depends on how you have configured your topology. Say if

Re: Re: tolerate how many nodes down in the cluster

2017-07-26 Thread Peng Xiao
"My own mailbox"<2535...@qq.com>; Sent: Wednesday, 26 July 2017, 7:31 PM; To: "user"; Cc: "anujw_2...@yahoo.co.in"; Subject: Re: Re: tolerate how many nodes down in the cluster One more question: why should the # of racks be equal to RF? For example, we have 4 machines, each virtualized to

Re: Re: tolerate how many nodes down in the cluster

2017-07-26 Thread Peng Xiao
Wednesday) 10:32 AM; To: "user"; Cc: "anujw_2...@yahoo.co.in"; Subject: Re: Re: tolerate how many nodes down in the cluster Thanks for the reminder, we will set up a new DC as suggested. -- Original Message -- From: "kurt greaves"; Sent: Wednesday, 26 July 2017, 1

Re: Re: tolerate how many nodes down in the cluster

2017-07-25 Thread Peng Xiao
Thanks for the reminder, we will set up a new DC as suggested. -- Original Message -- From: "kurt greaves"; Sent: Wednesday, 26 July 2017, 10:30 AM; To: "User"; Cc: "anujw_2...@yahoo.co.in"; Subject: Re: Re: tolerate how many nodes down in the cluster Keep i

Re: Re: tolerate how many nodes down in the cluster

2017-07-25 Thread kurt greaves
Keep in mind that you shouldn't just enable multiple racks on an existing cluster (this will lead to massive inconsistencies). The best method is to migrate to a new DC, as Brooke mentioned.
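
A rough outline of that migration path (keyspace and DC names are placeholders; the full procedure has additional steps around seeds, client drivers and repairs):

  # 1. Bring up the new rack-aware nodes in a new DC with auto_bootstrap: false.
  # 2. Add the new DC to the keyspace replication settings:
  cqlsh -e "ALTER KEYSPACE my_ks WITH replication =
    {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};"
  # 3. On each new node, stream the existing data over from the old DC:
  nodetool rebuild -- DC1
  # 4. Point clients at the new DC, then remove DC1 from replication and
  #    decommission its nodes.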

Re: Re: tolerate how many nodes down in the cluster

2017-07-25 Thread Peng Xiao
ng Xiao"<2535...@qq.com>; Subject: Re: Re: tolerate how many nodes down in the cluster I've never really understood why Datastax recommends against racks. In those docs they make it out to be much more difficult than it actually is to configure and manage racks. The important thing to ke

Re: Re: tolerate how many nodes down in the cluster

2017-07-24 Thread kurt greaves
I've never really understood why Datastax recommends against racks. In those docs they make it out to be much more difficult than it actually is to configure and manage racks. The important thing to keep in mind when using racks is that your # of racks should be equal to your RF. If you have keysp

Re: Re: tolerate how many nodes down in the cluster

2017-07-24 Thread Brooke Thorley
nsure to ensure that racks will be distributing data > correctly and evenly. At times when clusters need immediate expansion, > racks should be the last things to worry about. > -- Original Message -- > *From:* "Bhuvan Rawal"; > *Sent:* Monday, 24 July 2017

Re: Re: tolerate how many nodes down in the cluster

2017-07-24 Thread Anuj Wadehra
From: "Bhuvan Rawal"; Sent: Monday, 24 July 2017, 7:17 PM; To: "user"; Subject: Re: tolerate how many nodes down in the cluster Hi Peng, This really depends on how you have configured your topology. Say if you have segregated your DC into 3 racks with 10 servers each. With RF of 3 you

Re: tolerate how many nodes down in the cluster

2017-07-24 Thread Peng Xiao
"Bhuvan Rawal"; Sent: Monday, 24 July 2017, 7:17 PM; To: "user"; Subject: Re: tolerate how many nodes down in the cluster Hi Peng, This really depends on how you have configured your topology. Say if you have segregated your DC into 3 racks with 10 servers each. With RF of 3 you can s
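
Spelling out the arithmetic behind that example (a summary, not a quote from the thread): with NetworkTopologyStrategy, RF=3 and 3 racks of 10 nodes each, every rack holds exactly one replica of each partition, so:

  quorum = floor(RF/2) + 1 = floor(3/2) + 1 = 2
  replicas lost per partition when one whole rack (10 nodes) is down = 1
  => QUORUM still reaches 2 of 3 replicas with 10 nodes down

Without rack awareness, two unluckily placed nodes can hold 2 of the 3 replicas of some partition, so as few as 2 nodes down can break QUORUM for part of the data.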

Re: tolerate how many nodes down in the cluster

2017-07-24 Thread Bhuvan Rawal
> Hi, > Suppose we have a 30-node cluster in one DC with RF=3. > How many nodes can be down? Can we tolerate 10 nodes down? > It seems that we cannot avoid having all 3 replicas of some data land > within those 10 nodes, > so can we only tolerate 1 node down even though we have 30 nodes?

tolerate how many nodes down in the cluster

2017-07-24 Thread Peng Xiao
Hi, Suppose we have a 30-node cluster in one DC with RF=3. How many nodes can be down? Can we tolerate 10 nodes down? It seems that we cannot avoid having all 3 replicas of some data land within those 10 nodes, so can we only tolerate 1 node down even though we have 30 nodes? Could anyone please

Re: Node after restart sees other nodes down for 10 minutes

2016-07-27 Thread Paulo Motta
22 >>>> INFO [main] 2016-07-25 21:58:47,030 StorageService.java:1902 - Node >>>> ip-10-4-43-66.ec2.internal/10.4.43.66 state jump to NORMAL >>>> INFO [main] 2016-07-25 21:58:47,096 CassandraDaemon.java:644 - Waiting >>>> for gossip to settle before accep

Re: Node after restart sees other nodes down for 10 minutes

2016-07-27 Thread Farzad Panahi
t; Updating topology for /10.4.68.222 >>> INFO [GossipStage:1] 2016-07-25 21:58:47,497 Gossiper.java:1028 - Node / >>> 10.4.54.176 has restarted, now UP >>> INFO [GossipStage:1] 2016-07-25 21:58:47,544 StorageService.java:1902 - >>> Node /10.4.54.176 state jump to NOR

Re: Node after restart sees other nodes down for 10 minutes

2016-07-27 Thread Farzad Panahi
O [GossipStage:1] 2016-07-25 21:58:47,497 Gossiper.java:1028 - Node / >> 10.4.54.176 has restarted, now UP >> INFO [GossipStage:1] 2016-07-25 21:58:47,544 StorageService.java:1902 - >> Node /10.4.54.176 state jump to NORMAL >> INFO [HANDSHAKE-/10.4.54.176] 2016-07-25 21:58:47,54

Re: Node after restart sees other nodes down for 10 minutes

2016-07-27 Thread Paulo Motta
2 - > Node /10.4.54.176 state jump to NORMAL > INFO [HANDSHAKE-/10.4.54.176] 2016-07-25 21:58:47,548 > OutboundTcpConnection.java:515 - Handshaking version with /10.4.54.176 > INFO [GossipStage:1] 2016-07-25 21:58:47,594 TokenMetadata.java:429 - > Updating topology for /10.4.54.176 > I

Node after restart sees other nodes down for 10 minutes

2016-07-26 Thread Farzad Panahi
[GossipTasks:1] 2016-07-25 21:58:47,678 FailureDetector.java:287 - Not marking nodes down due to local pause of 43226235115 > 50 INFO [HANDSHAKE-/10.4.43.65] 2016-07-25 21:58:47,679 OutboundTcpConnection.java:515 - Handshaking version with /10.4.43.65 INFO [GossipStage:1] 2016-07-25 21

Re: Cassandra DC2 nodes down after increasing write requests on DC1 nodes

2014-11-16 Thread Tim Heckman
Hello Gabriel, On Sun, Nov 16, 2014 at 7:25 AM, Gabriel Menegatti wrote: > I said that load was not a big deal, because ops center shows this loads as > green, not as yellow or red at all. > > Also, our servers have many processors/threads, so I *think* this load is > not problematic. I've seen

Re: Cassandra DC2 nodes down after increasing write requests on DC1 nodes

2014-11-16 Thread Gabriel Menegatti
Hi Eric, Thanks for your reply. I said that load was not a big deal because OpsCenter shows these loads as green, not as yellow or red at all. Also, our servers have many processors/threads, so I *think* this load is not problematic. My assumption is that for some reason the 10 DC2 nodes are

Re: Cassandra DC2 nodes down after increasing write requests on DC1 nodes

2014-11-16 Thread Eric Stevens
> load average on DC1 nodes are around 3-5 and on DC2 around 7-10 Anecdotally I can say that loads in the 7-10 range have been dangerously high. When we had a cluster running in this range, the cluster was falling behind on important tasks such as compaction, and we really struggled to successful
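
For anyone reading along, the standard commands for spotting that kind of backlog (not something Eric listed, just the usual tooling) are:

  nodetool compactionstats   # pending compactions
  nodetool tpstats           # thread pool backlogs and dropped message counters
  nodetool netstats          # streaming and hints in flight

Steadily growing pending compactions or rising dropped counts in tpstats are the usual signs that writes are arriving faster than the nodes can absorb them.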

Cassandra DC2 nodes down after increasing write requests on DC1 nodes

2014-11-16 Thread Gabriel Menegatti
Hello, We are using Cassandra 2.1.2 in a multi-DC cluster (30 servers on DC1 and 10 on DC2) with a keyspace replication factor of 1 on DC1 and 2 on DC2. For some reason, when we increase the volume of write requests on DC1 (using ONE or LOCAL_ONE), the Cassandra Java process on DC2 nodes goes dow

Re: Nodes down

2013-05-13 Thread Robert Coli
On Sat, May 11, 2013 at 6:56 AM, Rodrigo Felix wrote: > What does the first line of bin/nodetool ring output means? It has the same > token of the down node. No, it doesn't. It is displaying the token of the highest node to indicate to you that the token range is a ring, and that the first node i
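
To illustrate the wrap-around Rob describes (values are invented and the column layout varies by version): the very first line of nodetool ring repeats the highest token in the ring with the other columns left blank; it is not an extra node, it just happens to match whichever node owns that highest token:

  Address    DC   Rack   Status  State   Load    Owns   Token
                                                        113427455640312821154458202477256070485
  10.0.0.1   dc1  rack1  Up      Normal  1.2 GB  33.3%  0
  10.0.0.2   dc1  rack1  Up      Normal  1.1 GB  33.3%  56713727820156410577229101238628035242
  10.0.0.3   dc1  rack1  Down    Normal  1.3 GB  33.3%  113427455640312821154458202477256070485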

Re: Nodes down

2013-05-12 Thread shashwat shriparv
On Sat, May 11, 2013 at 7:26 PM, Rodrigo Felix < rodrigofelixdealme...@gmail.com> wrote: > es the first line of bin/nodetool ring o Fix the number of mappers and reducers according to your hardware. *Thanks & Regards* ∞ Shashwat Shriparv

Nodes down

2013-05-11 Thread Rodrigo Felix
I'm running YCSB and sometimes, when the load is heavy, some nodes go down. What does the first line of the bin/nodetool ring output mean? It has the same token as the down node. For each down node, is it expected that a line like the first one (with most columns empty) is shown? Address DC

All nodes down even though ring shows up

2011-04-14 Thread mcasandra
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/All-nodes-down-even-though-ring-shows-up-tp6274152p6274152.html Sent from the cassandra-u...@incubator.apache.org mailing list archive at Nabble.com.