Re: A Single Dropped Node Fails Entire Read Queries

2017-03-22 Thread Shalom Sagges
Upgrading to 3.0.12 solved the issue. Thanks a lot for the help Joel!

Re: A Single Dropped Node Fails Entire Read Queries

2017-03-14 Thread Shalom Sagges
Thanks a lot Joel! I'll go ahead and upgrade. Thanks again!

Re: A Single Dropped Node Fails Entire Read Queries

2017-03-13 Thread Joel Knighton
It's possible that you're hitting https://issues.apache.org/jira/browse/CASSANDRA-13009 . In (simplified) summary, the read query picks the right number of endpoints fairly early in its execution. Because the down node has not been detected as down yet, it may be one of those endpoints. When this node doesn't respond, the read times out even though other live replicas could have satisfied the consistency level.
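An illustrative timeline of that race under the setup described in this thread (RF 3 per DC, so LOCAL_QUORUM needs 2 local replicas); the ~5s read timeout and ~10s failure-detection delay are assumptions based on common defaults, not values taken from the ticket:

    t=0      node C is killed; gossip has not yet marked it down
    t=0+     the coordinator picks 2 local endpoints for LOCAL_QUORUM, e.g. {A, C}; C is silently dead
    t=5s     only A has answered, so the read fails with a timeout even though B is up and could have served it
    t~10s    the failure detector marks C as DN; subsequent reads pick {A, B} and succeed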

Re: A Single Dropped Node Fails Entire Read Queries

2017-03-13 Thread Shalom Sagges
Just some more info: I've tried the same scenario on 2.0.14 and 2.1.15 and didn't encounter such errors. What I did find is that the timeout errors appear only until the node is discovered as "DN" in nodetool status. Once the node is in DN status, the errors stop and the data is retrieved. Could t…
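The "DN" flag Shalom refers to is the first column of nodetool status output; an abridged, purely illustrative excerpt (addresses, load figures and IDs are made up):

    Datacenter: DC1
    Status=Up/Down, State=Normal/Leaving/Joining/Moving
    --  Address    Load     Tokens  Owns   Host ID   Rack
    UN  10.0.0.1   1.2 GB   256     ...    ...       RAC1
    UN  10.0.0.2   1.1 GB   256     ...    ...       RAC1
    DN  10.0.0.3   1.3 GB   256     ...    ...       RAC1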

Re: A Single Dropped Node Fails Entire Read Queries

2017-03-12 Thread Shalom Sagges
Hi Michael, If a node suddenly fails, and there are other replicas that can still satisfy the consistency level, shouldn't the request succeed regardless of the failed node? Thanks!

Re: A Single Dropped Node Fails Entire Read Queries

2017-03-10 Thread Michael Shuler
I may be mistaken on the exact configuration option for the timeout you're hitting, but I believe this may be the general `request_timeout_in_ms` in conf/cassandra.yaml. A reasonable timeout for "node down" discovery/processing is needed to prevent the random flapping of nodes you'd get with a super short timeout. …
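For reference, the relevant knobs and their stock 3.0 defaults (from memory; treat the numbers as assumptions and check your own conf/cassandra.yaml):

    # conf/cassandra.yaml
    read_request_timeout_in_ms: 5000     # how long the coordinator waits for read responses
    range_request_timeout_in_ms: 10000   # same, for range scans
    write_request_timeout_in_ms: 2000    # how long the coordinator waits for write acks
    request_timeout_in_ms: 10000         # catch-all default for other operations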

Re: A Single Dropped Node Fails Entire Read Queries

2017-03-10 Thread Shalom Sagges
Hi Daniel, I don't think that's a network issue, because ~10 seconds after the node stopped, the queries were successful again without any timeout issues. Thanks!

Re: A Single Dropped Node Fails Entire Read Queries

2017-03-10 Thread Daniel Hölbling-Inzko
Could there be network issues in connecting between the nodes? If node a gets to be the query coordinator but can't reach b, and c is obviously down, it won't get a quorum. Greetings

Re: A Single Dropped Node Fails Entire Read Queries

2017-03-10 Thread Shalom Sagges
@Ryan, my keyspace replication settings are as follows:
CREATE KEYSPACE mykeyspace WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '3', 'DC2': '3', 'DC3': '3'} AND durable_writes = true;
CREATE TABLE mykeyspace.test ( column1 text, column2 text, column3 text, PRIMARY …
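The query itself is cut off above, but a read against that table at the consistency level being tested would look roughly like this in cqlsh (treating column1 as the partition key is an assumption, since the PRIMARY KEY clause is truncated):

    cqlsh> CONSISTENCY LOCAL_QUORUM;
    cqlsh> SELECT column1, column2, column3 FROM mykeyspace.test WHERE column1 = 'some-key';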

Re: A Single Dropped Node Fails Entire Read Queries

2017-03-10 Thread Daniel Hölbling-Inzko
The LOCAL_QUORUM works on the available replicas in the DC. So if your replication factor is 2 and you have 10 nodes you can still only lose 1. With a replication factor of 3 you can lose one node and still satisfy the query.
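Spelling out the arithmetic behind that (per DC, since the thread uses LOCAL_QUORUM):

    local quorum = floor(RF_dc / 2) + 1
    RF_dc = 2  ->  quorum = 2  ->  no local replica of a given row may be down
    RF_dc = 3  ->  quorum = 2  ->  one local replica may be down
    RF_dc = 5  ->  quorum = 3  ->  two local replicas may be down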

Re: A Single Dropped Node Fails Entire Read Queries

2017-03-09 Thread Ryan Svihla
What are your keyspace replication settings and what's your query?

A Single Dropped Node Fails Entire Read Queries

2017-03-09 Thread Shalom Sagges
Hi Cassandra Users, I hope someone could help me understand the following scenario:
Version: 3.0.9
3 nodes per DC
3 DCs in the cluster
Consistency: LOCAL_QUORUM
I did a small resiliency test and dropped a node to check the availability of the data. What I assumed would happen is nothing at all. …
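A rough sketch of the resiliency test for anyone wanting to reproduce it; the exact stop command depends on how Cassandra is installed (the service name below is an assumption):

    # on one node, stop Cassandra abruptly to simulate a dropped node
    sudo systemctl stop cassandra        # or: kill -9 <cassandra pid>
    # from a client, keep issuing LOCAL_QUORUM reads against the test table
    # on a surviving node, watch the stopped node flip from UN to DN
    nodetool status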