Btw.: I created an issue for that some months ago
https://issues.apache.org/jira/browse/CASSANDRA-12991
2017-04-01 22:25 GMT+02:00 Roland Otta :
> thank you both chris and benjamin for taking time to clarify that.
thank you both chris and benjamin for taking time to clarify that.
On Sat, 2017-04-01 at 21:17 +0200, benjamin roth wrote:
Tl;Dr: there are race conditions in a repair and it is not trivial to fix
them. So we'd rather stay with these race conditions. Actually they don't
really hurt. The worst case is that ranges are repaired that don't really
need a repair.
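To illustrate why over-repair is harmless, here is a toy sketch (my own
illustration, all names hypothetical, not Cassandra's actual classes):
cells reconcile by write timestamp, so streaming a range that is already
in sync re-applies data that reconciliation simply discards. Wasted work,
but no damage.

// Toy model of timestamp-based (last-write-wins) cell reconciliation.
final class Cell {
    final String value;
    final long writeTimestampMicros;

    Cell(String value, long writeTimestampMicros) {
        this.value = value;
        this.writeTimestampMicros = writeTimestampMicros;
    }
}

class OverRepairDemo {
    // Keep the cell with the higher write timestamp; on a tie keep what we have.
    static Cell reconcile(Cell existing, Cell incoming) {
        return incoming.writeTimestampMicros > existing.writeTimestampMicros
                ? incoming : existing;
    }

    public static void main(String[] args) {
        Cell onDisk = new Cell("v1", 1000L);
        // A spurious repair streams the identical cell again ...
        Cell streamed = new Cell("v1", 1000L);
        // ... and reconciliation keeps what was already there: a no-op.
        System.out.println(reconcile(onDisk, streamed) == onDisk); // true
    }
}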
On 01.04.2017 21:14, "Chris Lohfink" wrote:
> Repairs do not have an ability to instantly build a perfect view of the
> data across your 3 nodes at an exact time.
I think your way of communicating needs work. No one forces you to answer
questions.
On 01.04.2017 21:09, "daemeon reiydelle" wrote:
> What you are doing is certainly going to result in this, IF there is
> substantial backlog/network/disk or whatever pressure.
Repairs do not have an ability to instantly build a perfect view of the
data across your 3 nodes at an exact time. When a piece of data is written
there is a delay between when it is applied on each of the nodes, even if
it's just 500ms. So if a request to read the data and build the merkle tree
of the
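To make that race concrete, a toy simulation (my sketch, hypothetical
names, nothing like Cassandra's real validation code): the write reaches
replica A before the "tree" is computed but reaches replica B only
afterwards, so the trees disagree even though both replicas are identical
a moment later.

import java.util.HashMap;
import java.util.Map;

// Toy illustration of the repair race. Each "replica" is a map; the
// "Merkle tree" is reduced to a single hash over its contents.
public class MerkleRaceDemo {
    static int merkleHash(Map<String, String> replica) {
        return replica.hashCode();
    }

    public static void main(String[] args) {
        Map<String, String> replicaA = new HashMap<>();
        Map<String, String> replicaB = new HashMap<>();

        // A write at CL ONE is acked by A; B will apply it ~500ms later.
        replicaA.put("k1", "v1");

        // Validation runs inside that window:
        int treeA = merkleHash(replicaA); // already includes k1
        int treeB = merkleHash(replicaB); // does not include k1 yet

        // The replication delay then closes; the replicas are identical ...
        replicaB.put("k1", "v1");

        // ... but the trees were built mid-window, so the range looks out
        // of sync and gets streamed although no repair was actually needed.
        System.out.println("trees match: " + (treeA == treeB)); // false
    }
}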
What you are doing is certainly going to result in this, IF there is
substantial backlog/network/disk or whatever pressure.
What do you think will happen when you write with a replication factor
greater than the consistency level of the write? Perhaps your mental model
of how C* works needs work?
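For reference: the coordinator only blocks for as many replica acks as the
consistency level demands; the remaining replicas receive the mutation
asynchronously (or via hints if they are down). A rough sketch of that
arithmetic (my simplification, not Cassandra's actual ConsistencyLevel
code):

// How many replica acks a coordinator waits for, per consistency level.
public class RequiredAcks {
    enum CL { ONE, TWO, QUORUM, ALL }

    static int blockFor(CL cl, int replicationFactor) {
        switch (cl) {
            case ONE:    return 1;
            case TWO:    return 2;
            case QUORUM: return replicationFactor / 2 + 1;
            case ALL:    return replicationFactor;
            default:     throw new IllegalArgumentException();
        }
    }

    public static void main(String[] args) {
        int rf = 3;
        // Writing at ONE with RF 3: one ack is enough; the other two
        // replicas receive the mutation asynchronously and can lag.
        System.out.println("ONE    -> " + blockFor(CL.ONE, rf));    // 1
        System.out.println("QUORUM -> " + blockFor(CL.QUORUM, rf)); // 2
        System.out.println("ALL    -> " + blockFor(CL.ALL, rf));    // 3
    }
}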
Hi,
did you try to read data with consistency ALL immediately after a write with
consistency ONE? Does it succeed?
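A minimal sketch of that probe with the DataStax Java driver 3.x (matching
the Cassandra 3.0.x era; contact point, keyspace and table names are made
up):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;

// Write at CL ONE, then immediately read the same row back at CL ALL.
// If the read times out or misses the row, replicas are lagging behind
// the single ack the coordinator waited for.
public class ConsistencyProbe {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1") // assumption: local node
                .build();
             Session session = cluster.connect("test_ks")) {

            SimpleStatement write = new SimpleStatement(
                    "INSERT INTO t (id, val) VALUES (1, 'x')");
            write.setConsistencyLevel(ConsistencyLevel.ONE);
            session.execute(write);

            SimpleStatement read = new SimpleStatement(
                    "SELECT val FROM t WHERE id = 1");
            read.setConsistencyLevel(ConsistencyLevel.ALL);
            ResultSet rs = session.execute(read);
            System.out.println("read at ALL succeeded: " + (rs.one() != null));
        }
    }
}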
Best regards, Vladimir Yudovin,
Winguzone - Cloud Cassandra Hosting
On Thu, 30 Mar 2017 04:22:28 -0400 Roland Otta wrote:
hi,
we see the following behaviour in our environment:
our cluster consists of 6 nodes (cassandra version 3.0.7). the keyspace has
a replication factor of 3.
clients are writing data to the keyspace with consistency one.
we are doing parallel, incremental repairs with cassandra reaper.
even if a repair ju