Re: 1 node doing compaction all the time in 6-node cluster (C* 2.2.8)

2017-07-24 Thread kurt greaves
Have you checked the system logs/dmesg? I'd suspect it's an instance problem too; maybe you'll see some relevant errors in those logs.

Re: Understanding gossip and seeds

2017-07-24 Thread Jeff Jirsa
On 2017-07-22 07:08 (-0700), Daniel Hölbling-Inzko wrote: > Seeds are there to bootstrap a node for the very first time when it has > zero knowledge about the ring. > > I think I also read somewhere that seed nodes are periodically queried for > some sanity checks and therefore one should…

Re: Cassandra seems slow when having many read operations

2017-07-24 Thread Jeff Jirsa
On 2017-07-13 07:49 (-0700), Felipe Esteves wrote: > Hi, > > I have a Cassandra 2.1 cluster running on AWS that receives high read > loads, jumping from 100k requests to 400k requests, for example. Then it > normalizes and later comes another spike in throughput. > > To the application, it appears…

Re: Multi datacenter node loss

2017-07-24 Thread Jeff Jirsa
On 2017-07-20 13:23 (-0700), Roger Warner wrote: > Hi > > I’m a little dim on what multi datacenter implies in the 1 replica case. > I know about replica recovery; how about “node recovery”? > > As I understand, if there is a node failure or disk crash with a single node > cluster with…

Re: Re: tolerate how many nodes down in the cluster

2017-07-24 Thread kurt greaves
I've never really understood why DataStax recommends against racks. In those docs they make it out to be much more difficult than it actually is to configure and manage racks. The important thing to keep in mind when using racks is that your # of racks should be equal to your RF. If you have keyspaces…
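To make the racks == RF point concrete: with three racks defined for a datacenter and a keyspace at RF 3, NetworkTopologyStrategy places one replica in each rack. A minimal sketch with the DataStax Java driver 3.x (the contact point, keyspace name, and DC name are placeholders, not from the thread):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class RackAwareKeyspace {
    public static void main(String[] args) {
        // Placeholder contact point; substitute a node from your own cluster.
        try (Cluster cluster = Cluster.builder().addContactPoint("10.0.0.1").build();
             Session session = cluster.connect()) {
            // With 3 racks in the DC (as reported by the snitch, e.g. via
            // cassandra-rackdc.properties) and RF 3, NTS puts one replica per rack.
            session.execute("CREATE KEYSPACE IF NOT EXISTS app WITH replication = "
                    + "{'class': 'NetworkTopologyStrategy', 'DC1': 3}");
        }
    }
}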

Re: Re: tolerate how many nodes down in the cluster

2017-07-24 Thread Brooke Thorley
Hello Peng. I think spending the time to set up your nodes into racks is worth it for the benefits that it brings. With RF 3 and NTS you can tolerate the loss of a whole rack of nodes without losing QUORUM, as each rack will contain a full set of data. It makes ongoing cluster maintenance easier, a…

Re: 1 node doing compaction all the time in 6-node cluster (C* 2.2.8)

2017-07-24 Thread Eric Stevens
We've seen an unusually high instance failure rate with i3s (underlying hardware degradation), especially with nodes that have been around longer (recently provisioned nodes have a more typical failure rate). I wonder if your underlying hardware is degraded and EC2 just hasn't noticed yet.

Re: 1 node doing compaction all the time in 6-node cluster (C* 2.2.8)

2017-07-24 Thread kurt greaves
Just to rule out a simple problem, are you using a load balancing policy?
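For readers landing on this thread: a token-aware load balancing policy routes each request to a node that owns the partition, which avoids funnelling all coordination through one hot node. A minimal sketch with the DataStax Java driver 3.x; the contact point and DC name below are placeholders:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.TokenAwarePolicy;

public class LbPolicyExample {
    public static void main(String[] args) {
        // Token-aware routing wraps a DC-aware round-robin policy, so requests
        // go to local replicas first and coordinator load is spread evenly.
        Cluster cluster = Cluster.builder()
                .addContactPoint("10.0.0.1")        // placeholder contact point
                .withLoadBalancingPolicy(new TokenAwarePolicy(
                        DCAwareRoundRobinPolicy.builder()
                                .withLocalDc("DC1") // placeholder DC name
                                .build()))
                .build();
        System.out.println(cluster.getClusterName());
        cluster.close();
    }
}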

Understanding Dynamic Snitch Scores

2017-07-24 Thread Pranay akula
Hi, let's say I have 5 nodes in a DC and the dynamic snitch scores are A => 1.84106731901363, B => 1.1386762906094, C => 2.63620400428772, D => 3.06495631470972, E => 0. Badness_threshold is set to 1. Does this mean the node with a higher value is considered a bad node? If I have…
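Short answer: yes, dynamic snitch scores are latency-derived, so a higher score is worse, and badness_threshold = 1 means the statically preferred replica must score more than (1 + 1) = 2 times the best alternative before the snitch reorders replicas. Below is a simplified Java model of that comparison, based on one reading of DynamicEndpointSnitch in the 2.x source; treat it as an approximation, not the actual class:

import java.util.*;

public class DynamicSnitchSketch {
    // Simplified model of the check in Cassandra's DynamicEndpointSnitch
    // (an approximation of the 2.x logic, not the real code).
    // Scores are latency-derived, so HIGHER means WORSE.
    static boolean shouldReorder(List<Double> staticOrderScores, double badnessThreshold) {
        List<Double> sorted = new ArrayList<>(staticOrderScores);
        Collections.sort(sorted);
        for (int i = 0; i < staticOrderScores.size(); i++) {
            // Re-sort by score only when a statically preferred node is more
            // than (1 + threshold) times worse than the best it could be.
            if (staticOrderScores.get(i) > sorted.get(i) * (1.0 + badnessThreshold))
                return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // The scores from the question, treated here (for illustration only)
        // as if A..E were the static snitch's preferred order.
        List<Double> scores = Arrays.asList(1.84106731901363, 1.1386762906094,
                                            2.63620400428772, 3.06495631470972, 0.0);
        // Prints true: A's 1.84 exceeds the best score (E's 0) even doubled,
        // so the snitch would reorder toward the lower-scoring replicas.
        System.out.println(shouldReorder(scores, 1.0));
    }
}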

Re: Re: tolerate how many nodes down in the cluster

2017-07-24 Thread Anuj Wadehra
Hi Peng, Three things are important when you are evaluating fault tolerance and availability for your cluster: 1. RF, 2. CL, 3. Topology (how data is replicated across racks). If you assume that N nodes from ANY rack may fail at the same time, then you can afford failure of RF-CL nodes and still be…
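A worked instance of the RF-CL arithmetic above, as a small sketch (class and method names are illustrative): with RF 3 and QUORUM, floor(RF/2) + 1 = 2 replicas must respond, so each token range tolerates RF - CL = 3 - 2 = 1 replica down:

public class FaultToleranceMath {
    // Replicas that must answer at QUORUM for a given replication factor.
    static int quorum(int rf) { return rf / 2 + 1; }

    // Per the formula above: a token range stays available while no more
    // than RF minus the CL-required replica count have failed.
    static int tolerableFailuresPerRange(int rf, int requiredByCl) {
        return rf - requiredByCl;
    }

    public static void main(String[] args) {
        int rf = 3;
        System.out.println(quorum(rf));                                // 2
        System.out.println(tolerableFailuresPerRange(rf, quorum(rf))); // 1
    }
}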

Re: tolerate how many nodes down in the cluster

2017-07-24 Thread Peng Xiao
Hi Bhuvan, the following link suggests against using racks, and the reasoning looks sound: http://www.datastax.com/dev/blog/multi-datacenter-replication "Defining one rack for the entire cluster is the simplest and most common implementation. Multiple racks should be avoided for the following…"

Re: tolerate how many nodes down in the cluster

2017-07-24 Thread Bhuvan Rawal
Hi Peng, This really depends on how you have configured your topology. Say you have segregated your DC into 3 racks with 10 servers each. With an RF of 3 you can safely assume your data to be available if one rack goes down. But if different servers amongst the racks fail, then I guess you are no…

tolerate how many nodes down in the cluster

2017-07-24 Thread Peng Xiao
Hi, Suppose we have a 30-node cluster in one DC with RF=3. How many nodes can be down? Can we tolerate 10 nodes down? It seems we cannot prevent all 3 replicas of some data from landing within those 10 nodes, so can we only tolerate 1 node down even though we have 30 nodes? Could anyone please advise…
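The question invites a back-of-the-envelope check. A minimal simulation sketch under two simplifying assumptions (one token per node, SimpleStrategy-style placement on consecutive ring positions): it shows that when 10 of 30 nodes fail at once, some token range frequently loses all 3 replicas, which is why only rack-aware placement gives a guaranteed tolerance of a whole rack:

import java.util.*;

public class QuorumLossSimulation {
    // Illustrative-only model: 30 single-token nodes on a ring, RF 3 with
    // each range's replicas on 3 consecutive nodes, 10 random simultaneous
    // failures. Real clusters with vnodes fare even worse.
    public static void main(String[] args) {
        int nodes = 30, rf = 3, failures = 10, trials = 100_000;
        Random rng = new Random(42);
        int trialsWithTotalLoss = 0;
        for (int t = 0; t < trials; t++) {
            // Pick 10 distinct failed nodes uniformly at random.
            List<Integer> ids = new ArrayList<>();
            for (int i = 0; i < nodes; i++) ids.add(i);
            Collections.shuffle(ids, rng);
            Set<Integer> failed = new HashSet<>(ids.subList(0, failures));
            // A range loses all replicas if rf consecutive positions failed.
            boolean lost = false;
            for (int i = 0; i < nodes && !lost; i++) {
                boolean allFailed = true;
                for (int r = 0; r < rf; r++)
                    allFailed &= failed.contains((i + r) % nodes);
                lost = allFailed;
            }
            if (lost) trialsWithTotalLoss++;
        }
        // A substantial fraction of trials lose every replica of some range,
        // so 10 arbitrary failures are not survivable without racks.
        System.out.printf("lost all replicas of some range in %.1f%% of trials%n",
                100.0 * trialsWithTotalLoss / trials);
    }
}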