Renoy,
Thanks. That kernel is new enough to have the patch for the infamous Linux
kernel futex bug detailed here:
https://groups.google.com/d/topic/mechanical-sympathy/QbmpZxp6C64
To answer your questions above:
What you're seeing is likely just normal behavior for Cassandra and is an
artifact o
Hi Joshua,
# uname -a
Linux cn6.chn6us1c1.cdn 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27
UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
On Fri, Aug 3, 2018 at 8:27 AM, Joshua Galbraith <
jgalbra...@newrelic.com.invalid> wrote:
> Renoy
Renoy,
Out of curiosity, which kernel version are your nodes running?
You may find this old message on the mailing list helpful:
http://mail-archives.apache.org/mod_mbox/cassandra-user/201602.mbox/%3CCAA=6j0-0vabfan3djfatoxyjwwehpdie67v2wm_u5kaqoro...@mail.gmail.com%3E
On Wed, Aug 1, 2018 at 5:3
There are two factors in Cassandra that determine what's called the
network topology: datacenter and rack.
rack - not necessarily a physical rack; rather, a single point of
failure. For example, in the case of AWS, one availability zone is usually
chosen to be a Cassandra rack.
datace
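For illustration, with the GossipingPropertyFileSnitch each node declares its own datacenter and rack in cassandra-rackdc.properties, and gossip propagates that to the rest of the cluster. A minimal sketch (the dc/rack names and file path below are examples, not values from this thread):

```
# cassandra-rackdc.properties (path varies by install, e.g. /etc/cassandra/)
# Example values only - use your real datacenter and rack/AZ names.
dc=dc1
rack=rack1
```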
We are using a 3-node Cassandra cluster in one datacenter.
For our keyspaces, as suggested in best practices, we are using
NetworkTopologyStrategy for replication with the GossipingPropertyFileSnitch.
For read/write consistency we are using QUORUM.
In majority of cases when users use NetworkTo
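As a side note on the QUORUM consistency level mentioned above: a quorum is floor(RF / 2) + 1 replicas, so availability under node failure depends on the replication factor, not directly on the node count. A minimal sketch (RF=3 is an assumption here; the thread states 3 nodes but not the keyspace's RF):

```python
# Quorum size for a given replication factor: floor(RF / 2) + 1.
def quorum(replication_factor: int) -> int:
    return replication_factor // 2 + 1

# Assuming RF=3 (a common choice on a 3-node cluster), QUORUM needs
# 2 replicas, so reads and writes still succeed with one node down.
print(quorum(3))  # 2
print(quorum(5))  # 3
```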