You're right, they should be the same.
Next time this happens, set the log level to debug (via the
StorageService JMX MBean) on the surviving nodes and let a couple of
queries fail before restarting the 3rd (and setting the level back to info).
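
(A minimal sketch of flipping the level over JMX, assuming the StorageService
MBean in your version exposes the setLog4jLevel operation; the host argument,
the JMX port, and the "org.apache.cassandra" logger qualifier below are
placeholders to adjust for your cluster:)

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class SetLogLevel {
        public static void main(String[] args) throws Exception {
            String host = args[0];                             // surviving node's address
            String port = args.length > 1 ? args[1] : "8080";  // JMX port from your config
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                ObjectName ss = new ObjectName("org.apache.cassandra.db:type=StorageService");
                // Set the org.apache.cassandra logger hierarchy to DEBUG; invoke the same
                // operation with "INFO" afterwards to put it back.
                mbs.invoke(ss, "setLog4jLevel",
                           new Object[] { "org.apache.cassandra", "DEBUG" },
                           new String[] { "java.lang.String", "java.lang.String" });
            }
        }
    }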
On Sat, Dec 4, 2010 at 12:01 AM, Dan Hendry wrote:
> Doesn't consistency level ALL=QUORUM at RF=2?
> - A Cassandra node (say 3) goes down (even with 24 GB of ram, OOM errors
> are the bane of my existence)
Following up on this bit: OOM should not be the status quo. Have you
tweaked the JVM heap size to reflect your memtable sizes, etc.?
http://wiki.apache.org/cassandra/MemtableThresholds
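
(As a rough sketch of the kind of estimate that page describes -- the 3x
factor and the sample numbers here are assumptions for illustration, not
measurements from this cluster:)

    public class HeapEstimate {
        // Each actively written memtable can transiently occupy roughly three times
        // its throughput threshold while one copy flushes and its successor fills.
        static long estimateHeapMb(int memtableThroughputMb, int hotColumnFamilies, int cacheMb) {
            return (long) memtableThroughputMb * 3 * hotColumnFamilies + 1024 + cacheMb;
        }

        public static void main(String[] args) {
            // Example: 128 MB memtable threshold, 5 hot column families, 512 MB of caches.
            System.out.println(estimateHeapMb(128, 5, 512) + " MB as a lower bound for the heap");
        }
    }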
--
/ P
Doesn't consistency level ALL=QUORUM at RF=2?
I have not had a chance to test your fix but I don't THINK this is the
issue. If it is the issue, how do consistency levels ALL and QUORUM differ
at this replication factor?
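
(For the replica-count side of that question, assuming the usual
quorum = floor(RF/2) + 1 definition, a quick check shows both levels block
for the same number of replicas at RF=2:)

    public class BlockForCheck {
        static int quorum(int rf) { return rf / 2 + 1; } // integer division == floor
        static int all(int rf)    { return rf; }

        public static void main(String[] args) {
            int rf = 2;
            System.out.println("QUORUM blocks for " + quorum(rf) + " replicas"); // 2
            System.out.println("ALL blocks for    " + all(rf) + " replicas");    // 2
        }
    }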
On Sat, Dec 4, 2010 at 12:03 AM, Jonathan Ellis wrote:
> I think you are running into https://issues.apache.org/jira/browse/CASSANDRA-1316 [...]
I think you are running into
https://issues.apache.org/jira/browse/CASSANDRA-1316: when an
inconsistency was discovered on a QUORUM/ALL read, the repair was always
performed at QUORUM instead of at the original CL. Thus, reading at ALL
you would see the correct answer on the 2nd read, but you weren't
guaranteed to on the first.
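
(Purely as an illustration of the sequence described above -- a toy model,
not Cassandra's actual read path: two replicas disagree, the coordinator
resolves by timestamp and writes the winner back, so the second read comes
back consistent:)

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    public class ReadRepairToy {
        record Value(String data, long timestamp) {}

        public static void main(String[] args) {
            // RF=2: one replica has the newer write, the other missed it while down.
            List<Value> replicas = new ArrayList<>(List.of(
                    new Value("new", 200L),
                    new Value("old", 100L)));

            // First read: the coordinator sees the mismatch and picks the newest value...
            Value winner = replicas.stream()
                    .max(Comparator.comparingLong(Value::timestamp)).get();
            // ...then repairs the stale replica (and, per the description above, that
            // repair used to be acknowledged at QUORUM rather than the CL the client asked for).
            replicas.replaceAll(v -> v.timestamp() < winner.timestamp() ? winner : v);

            // Second read: both replicas now agree, so an ALL/QUORUM read returns "new".
            System.out.println(replicas);
        }
    }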
I am seeing fairly strange behavior in my Cassandra cluster.
Setup
- 3 nodes (let's call them nodes 1, 2, and 3)
- RF=2
- A set of servers (producers) which write data to the cluster at
consistency level ONE
- A set of servers (consumers/processors) which read data from the cluster
at cons