Good morning,
Alain, thank you so much. This is exactly what I needed.
In my test I had a node whose data directory had, for whatever reason, become
corrupted. I keep my snapshots in a separate folder.
Here are the steps I took to recover my sick node:
0) Cassandra is stopped on my si
Hi
If this problem was because of data inconsistencies, it should have been
very rare. However, I am seeing it happen very often (almost 50% of the
time). Statistically, this should be very unlikely if the number of
replication failures is small.
On Thu, Jun 25, 2015 at 11:55 PM, Tyler Hobbs
On Thu, Jun 25, 2015 at 1:00 PM, Robert Coli wrote:
> [1] or read repair set to 100% combined with a full scan of all data...
> which no one does...
And this is only true if "full scan" means reading every partition
individually. Reads of partition ranges (or a range slice, in old Thrift
terms
On Thu, Jun 25, 2015 at 5:14 AM, Jack Krupansky
wrote:
> Hinted handoff - which is what provides eventual consistency - can time
> out and be discarded/lost if the cluster is under heavy load or encounters
> poor network connectivity or nodes are down for too long, which is what
> requires runnin
-- Forwarded message --
From: David Aronchick
Date: Thu, Jun 25, 2015 at 10:33 AM
Subject: Issue when node goes away?
To: cassandra-u...@cassandra.apache.org
I posted this to StackOverflow with no response:
http://stackoverflow.com/questions/30744486/how-to-handle-failures-in-
I think that your benchmark will soon be relevant :).
Do not hesitate to share your exact use case (configurations, size & number
of requests, results: latencies / throughput / errors, if any).
The only benchmark I have seen so far is the one made by DataStax that I
already shared with you (
http://www.da
Yes, our clients didn't specify the port so they are using 9042 by default.
On Thu, Jun 25, 2015 at 9:23 AM, Alain RODRIGUEZ wrote:
> Hi Zhiyan,
>
> 2 - RF 2 will improve overall performance, but it will not change the 2.0.*
> vs 2.1.* comparison. Same comment about adding 3 nodes. Yet Cassandra is supposed
Hi Jean,
Answers inline, to be sure to be exhaustive:
- how can I restore the data directory structure in order to copy my
snapshots at the right position?
--> By making a script to do it and testing it, I would say. Basically, under
any table directory you have a "snapshots/snapshot_name" directory (snaps
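Alain's "making a script to do it" idea could look roughly like the minimal
sketch below. The data path, keyspace layout, and snapshot name are
illustrative assumptions, not values from this thread, and Cassandra should
be stopped while the files are copied back.

    import os
    import shutil

    # Assumed defaults -- adjust to your own layout (hypothetical values).
    DATA_DIR = "/var/lib/cassandra/data"   # data_file_directories on the node
    SNAPSHOT_NAME = "my_snapshot"          # name used when taking the snapshot

    # For every keyspace/table directory, copy the SSTables saved under
    # snapshots/<SNAPSHOT_NAME> back into the table directory itself.
    for keyspace in os.listdir(DATA_DIR):
        ks_path = os.path.join(DATA_DIR, keyspace)
        if not os.path.isdir(ks_path):
            continue
        for table in os.listdir(ks_path):
            snap_dir = os.path.join(ks_path, table, "snapshots", SNAPSHOT_NAME)
            if not os.path.isdir(snap_dir):
                continue
            for sstable in os.listdir(snap_dir):
                src = os.path.join(snap_dir, sstable)
                if os.path.isfile(src):
                    shutil.copy2(src, os.path.join(ks_path, table))

Once the files are in place, restart Cassandra (or run "nodetool refresh" per
table) so the restored SSTables are picked up.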
Hi Zhiyan,
2 - RF 2 will improve overall performance, but it will not change the 2.0.*
vs 2.1.* comparison. Same comment about adding 3 nodes. Yet Cassandra is
supposed to be linearly scalable, so...
3 - I guess this was the first thing to do. You did not answer about the heap
size. One of the main differences b
Quorum will give you strong consistency, but if you're using RF=2 you're
going to have issues, as Quorum on RF=2 = CL=ALL. You'll want to use RF=3
to make sure you can tolerate failure of a node, otherwise a single node
going down will result in unanswerable queries.
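For reference, the arithmetic behind that point: QUORUM needs
floor(RF / 2) + 1 replicas to respond, so with RF=2 it needs both replicas
(effectively ALL), while with RF=3 it needs only 2 of 3 and can tolerate one
node being down. A tiny illustrative sketch:

    def quorum(rf):
        # Number of replicas that must respond for a QUORUM read or write.
        return rf // 2 + 1

    for rf in (2, 3):
        needed = quorum(rf)
        print("RF=%d: QUORUM needs %d of %d replicas, tolerates %d down"
              % (rf, needed, rf, rf - needed))
    # RF=2: QUORUM needs 2 of 2 replicas, tolerates 0 down  (same as ALL)
    # RF=3: QUORUM needs 2 of 3 replicas, tolerates 1 down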
On Thu, Jun 25, 2015 at 6:37
Thanks Alain,
For 2, we tried CL ONE but the improvement was small. We will try RF 2 and see.
Maybe adding 3 more boxes will help too.
For 3, we changed the key cache back to the default (100MB) and it helped
improve the perf, but it is still worse than 2.0.14. We also noticed that the
hit rate grew more slowly than on 2.0.1
Hi,
I am testing snapshot restore procedures in case of a major catastrophe on our
cluster. I’m using Cassandra 2.1.7 with RF=3.
The scenario that I am trying to solve is how to quickly get one node back to
work after its disk failed and it lost all its data, assuming that the only
thing I have is
Regards, Aditya. I agree with Jack here. In our tests with reads
and writes in Cassandra (version 2.1.5), we played with several CLs, and
QUORUM is the best for us.
On 25/06/15 08:14, Jack Krupansky wrote:
Hinted handoff - which is what provides eventual consistency - can
time out and be
Hinted handoff - which is what provides eventual consistency - can time out
and be discarded/lost if the cluster is under heavy load or encounters poor
network connectivity or nodes are down for too long, which is what requires
running repair. That's why QUORUM is the recommended CL for both write
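To make that recommendation concrete, here is a minimal sketch of issuing a
write and a read at QUORUM with the DataStax Python driver. The contact
point, keyspace, and table ("demo.kv") are hypothetical, not taken from this
thread.

    from cassandra.cluster import Cluster
    from cassandra import ConsistencyLevel
    from cassandra.query import SimpleStatement

    cluster = Cluster(["127.0.0.1"], port=9042)  # hypothetical contact point
    session = cluster.connect("demo")            # hypothetical keyspace

    # With RF=3, a QUORUM write and a QUORUM read overlap on at least one
    # replica, so the read sees the latest acknowledged write even if hints
    # were dropped.
    write = SimpleStatement("INSERT INTO kv (k, v) VALUES (%s, %s)",
                            consistency_level=ConsistencyLevel.QUORUM)
    read = SimpleStatement("SELECT v FROM kv WHERE k = %s",
                           consistency_level=ConsistencyLevel.QUORUM)

    session.execute(write, ("some-key", "some-value"))
    for row in session.execute(read, ("some-key",)):
        print(row.v)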
Except if the node failed to take the write and you have no Hinted
Handoff (or if, for some reason, the hints also failed).
Have you tried reading at QUORUM or even ALL? This would force a synchronous
read repair. You can also try running a repair directly.
Hope this will help,
C*heers,
Alain
2015-06-25 13:34 GMT+02
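Alain's suggestion above can be expressed per statement: a read at ALL makes
the coordinator contact every replica and repair any stale one synchronously
before returning. A short sketch with the same hypothetical "demo.kv" table:

    from cassandra.cluster import Cluster
    from cassandra import ConsistencyLevel
    from cassandra.query import SimpleStatement

    session = Cluster(["127.0.0.1"]).connect("demo")  # hypothetical setup

    # Reading at ALL queries every replica; mismatching replicas are
    # reconciled (read repaired) before the result is returned.
    stmt = SimpleStatement("SELECT v FROM kv WHERE k = %s",
                           consistency_level=ConsistencyLevel.ALL)
    rows = session.execute(stmt, ("some-key",))

A full "nodetool repair" on the affected nodes remains the way to reconcile
all of the data, not just the partitions you happen to read.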
I am using consistency one for both. However, the writes had happened a few
days before, so it does not look like an issue of eventual consistency.
On Thu, Jun 25, 2015 at 3:59 PM, Perica Milošević <
perica.milose...@gmail.com> wrote:
> Which ConsistencyLevel do you use for writing and reading of
On 06/18/2015 11:13 AM, Jonathan Ballet wrote:
Hi,
I'm looking for information on how to correctly deploy an OpsCenter
instance behind an HTTP(S) proxy.
I have a running instance of OpsCenter 5.1 reachable at
http://opscenter:/opscenter/ but I would like to be able to
serve this kind of tool
Which ConsistencyLevel do you use for writing and reading of the data?
Cheers,
Perica
On Thu, Jun 25, 2015 at 12:12 PM, Aditya Shetty
wrote:
> Hi
>
> I have a 3-node Cassandra cluster with a replication factor of 2. I have a
> basic column family which I am reading by primary key. Here is the C
Hi
I have a 3-node Cassandra cluster with a replication factor of 2. I have a
basic column family which I am reading by primary key. Here is the CF
structure:
CREATE TABLE reviews_platform.object_stats (
    object_owner_id int,
    object_type int,
    object_id text,
    num_of_reviews int,