Have you checked system logs/dmesg? I'd suspect it's an instance problem
too; maybe you'll see some relevant errors in those logs.
On 2017-07-22 07:08 (-0700), Daniel Hölbling-Inzko wrote:
> Seeds are there to bootstrap a node for the very first time when it has
> zero knowledge about the ring.
>
> I think I also read somewhere that seed nodes are periodically queried for
> some sanity checks and therefore one should
On 2017-07-13 07:49 (-0700), Felipe Esteves wrote:
> Hi,
>
> I have a Cassandra 2.1 cluster running on AWS that receives high read
> loads, jumping from 100k requests to 400k requests, for example. Then it
> normalizes, and later another burst of high throughput comes.
>
> To the application, it appea
On 2017-07-20 13:23 (-0700), Roger Warner wrote:
> Hi
>
> I'm a little dim on what multi datacenter implies in the 1 replica case.
> I know about replica recovery, how about 'node recovery'?
>
> As I understand, if there is a node failure or disk crash with a single node
> cluster with
I've never really understood why Datastax recommends against racks. In
those docs they make it out to be much more difficult than it actually is
to configure and manage racks.
The important thing to keep in mind when using racks is that your # of
racks should be equal to your RF. If you have keysp
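For illustration, something like the following (keyspace name, DC name, and
contact point are made up here) gives RF=3 with NetworkTopologyStrategy, so
with exactly 3 racks the replicas land one per rack:

    from cassandra.cluster import Cluster

    session = Cluster(['10.0.0.1']).connect()

    # RF=3 in one DC; with 3 racks defined for the nodes, NetworkTopologyStrategy
    # places the three replicas of each partition in three different racks.
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS demo
        WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3}
    """)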
Hello Peng.
I think spending the time to set up your nodes into racks is worth it for
the benefits that it brings. With RF3 and NTS you can tolerate the loss of
a whole rack of nodes without losing QUORUM as each rack will contain a
full set of data. It makes ongoing cluster maintenance easier, a
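Rough arithmetic behind the QUORUM point (just numbers, nothing
Cassandra-specific):

    rf = 3
    quorum = rf // 2 + 1       # 2 replicas must answer
    racks = 3                  # one replica of each partition per rack

    # Losing a whole rack removes at most one replica of any partition:
    replicas_left = rf - 1     # 2
    print(replicas_left >= quorum)   # True -> QUORUM still achievable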
We've seen an unusually high instance failure rate with i3's (underlying
hardware degradation). Especially with the nodes that have been around
longer (recently provisioned nodes have a more typical failure rate). I
wonder if your underlying hardware is degraded and EC2 just hasn't noticed
yet.
Just to rule out a simple problem, are you using a load balancing policy?
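For example, with the Python driver you can set one explicitly, something
like this (contact points and DC name are placeholders):

    from cassandra.cluster import Cluster
    from cassandra.policies import DCAwareRoundRobinPolicy, TokenAwarePolicy

    # Token-aware routing on top of DC-aware round robin, so requests are
    # sent to replicas in the local DC first.
    cluster = Cluster(
        contact_points=['10.0.0.1', '10.0.0.2'],
        load_balancing_policy=TokenAwarePolicy(
            DCAwareRoundRobinPolicy(local_dc='dc1')),
    )
    session = cluster.connect()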
Hi,
let's say I have 5 nodes in a DC and the dynamic snitch scores are
A => '1.84106731901363'
B => '1.1386762906094'
C => '2.63620400428772'
D => '3.06495631470972'
E => '0'
Badness_threshold is set to 1.
Does this mean the node with the higher value is considered the bad node, right?
If I have
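For what it's worth, higher scores are worse (they track observed latency),
and as I understand the badness threshold, the snitch only routes away from
the normally preferred replica when its score is worse than the best by more
than that fraction. A rough sketch of that comparison with made-up scores
(not the actual implementation):

    # dynamic_snitch_badness_threshold from cassandra.yaml; 1.0 means a replica
    # has to look more than 100% worse than the best before it is avoided.
    scores = {'A': 1.84, 'B': 1.14, 'C': 2.64, 'D': 3.06}
    badness_threshold = 1.0

    best = min(scores.values())
    for node, score in sorted(scores.items()):
        avoid = score > best * (1.0 + badness_threshold)
        print(node, score, 'avoid' if avoid else 'keep')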
Hi Peng,
Three things are important when you are evaluating fault tolerance and
availability for your cluster:
1. RF
2. CL
3. Topology - how data is replicated in racks.
If you assume that N nodes from ANY rack may fail at the same time, then you
can afford failure of RF-CL nodes and still be
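A concrete instance of that RF/CL arithmetic:

    # A partition can survive RF - CL replica failures at a given consistency level.
    rf, cl = 3, 2              # e.g. RF=3 at QUORUM (2 of 3)
    print(rf - cl)             # 1 -> any one replica of a partition may be down

    # How many whole nodes that translates to depends on topology: with one
    # replica per rack, an entire rack can be lost without any partition
    # losing more than one replica.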
Hi Bhuvan,
From the following link, it doesn't suggest using racks, and that looks
reasonable.
http://www.datastax.com/dev/blog/multi-datacenter-replication
Defining one rack for the entire cluster is the simplest and most common
implementation. Multiple racks should be avoided for the following
Hi Peng,
This really depends on how you have configured your topology. Say you have
segregated your DC into 3 racks with 10 servers each. With an RF of 3 you
can safely assume your data to be available if one rack goes down.
But if different servers amongst the racks fail then I guess you are no
Hi,
Suppose we have a 30-node cluster in one DC with RF=3.
How many nodes can be down? Can we tolerate 10 nodes down?
It seems that we cannot avoid having all 3 replicas of some data within those
10 nodes, so can we then only tolerate 1 node down even though we have 30 nodes?
Could anyone please advise