Re: Replication factor, LOCAL_QUORUM write consistency and materialized views

2024-05-17 Thread Gábor Auth
Hi, On Fri, May 17, 2024 at 6:18 PM Jon Haddad wrote: > I strongly suggest you don't use materialized views at all. There are > edge cases that in my opinion make them unsuitable for production, both in > terms of cluster stability as well as data integrity. > Oh, there is already an open and

Re: Replication factor, LOCAL_QUORUM write consistency and materialized views

2024-05-17 Thread Gábor Auth
(~100 MB at all), the keyspace's >> replication factor is 3, everything is works fine... except: if I restart a >> node, I get a lot of errors with materialized views and consistency level >> ONE, but only for those tables for which there is more than one >> materialize

Re: Replication factor, LOCAL_QUORUM write consistency and materialized views

2024-05-17 Thread Jon Haddad
a lot of errors with materialized views and consistency level > ONE, but only for those tables for which there is more than one > materialized view. > > Tables without materialized view don't have it, works fine. > Tables that have it, but only one materialized view, also work

Replication factor, LOCAL_QUORUM write consistency and materialized views

2024-05-17 Thread Gábor Auth
a lot of errors with materialized views and consistency level ONE, but only for those tables for which there is more than one materialized view. Tables without materialized view don't have it, works fine. Tables that have it, but only one materialized view, also works fine. But, a table with

Re: Question about commit consistency level for Lightweight Transactions in Paxos v2

2024-03-11 Thread Weng, Justin via user
So for upgrading Paxos to v2, the non-serial consistency level should be set to ANY or LOCAL_QUORUM, and the serial consistency level should still be SERIAL or LOCAL_SERIAL. Got it, thanks! From: Laxmikant Upadhyay Date: Tuesday, 12 March 2024 at 7:33 am To: user@cassandra.apache.org Cc: Weng

Re: Question about commit consistency level for Lightweight Transactions in Paxos v2

2024-03-11 Thread Laxmikant Upadhyay
You need to set both in case of LWT. Your regular non-serial consistency level will only be applied during the commit phase of the LWT. On Wed, 6 Mar, 2024, 03:30 Weng, Justin via user, wrote: > Hi Cassandra Community, > > > > I’ve been investigating Cassandra Paxos v2 (as implemented in
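
A minimal sketch of setting both levels on a conditional write with the DataStax Java driver 3.x (contact point, keyspace and table names are hypothetical): the serial consistency level drives the Paxos prepare/propose rounds, while the regular consistency level is what the coordinator waits for when committing the accepted value.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;

    public class LwtCommitConsistency {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("my_ks")) {
                SimpleStatement lwt = new SimpleStatement(
                        "INSERT INTO users (id, name) VALUES (?, ?) IF NOT EXISTS", 42, "alice");
                lwt.setSerialConsistencyLevel(ConsistencyLevel.LOCAL_SERIAL); // Paxos rounds
                lwt.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);       // commit phase
                ResultSet rs = session.execute(lwt);
                System.out.println("applied: " + rs.wasApplied());
            }
        }
    }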

Question about commit consistency level for Lightweight Transactions in Paxos v2

2024-03-05 Thread Weng, Justin via user
commit consistency level for LWT after upgrading Paxos. In cqlsh<https://docs.datastax.com/en/cql-oss/3.3/cql/cql_reference/cqlshSerialConsistency.html>, gocql<https://github.com/gocql/gocql/blob/master/session.go#L1247> and Python driver<https://docs.datastax.com/en/developer/p

Re: Wrong Consistency level seems to be used

2022-07-21 Thread Jim Shaw
My experience to debug this kind of issue is to turn on trace. The nice thing in cassandra is: you can turn on trace only on 1 node and with a small percentage, i.e. nodetool settraceprobability 0.05 --- only run on 1 node. Hope it helps. Regards, James On Thu, Jul 21, 2022 at 2:50 PM Tolbert
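
A complementary, client-side way to get similar information with the Java driver 3.x (a sketch with hypothetical keyspace and table names): enable tracing on a single statement and read back the trace events, which show which replicas the coordinator contacted for that query.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.QueryTrace;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;

    public class TraceOneQuery {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("my_ks")) {
                SimpleStatement stmt = new SimpleStatement("SELECT * FROM events WHERE id = 1");
                stmt.enableTracing(); // trace just this statement instead of a node-wide probability
                ResultSet rs = session.execute(stmt);
                QueryTrace trace = rs.getExecutionInfo().getQueryTrace();
                System.out.println("coordinator: " + trace.getCoordinator());
                for (QueryTrace.Event event : trace.getEvents()) {
                    // each event names the replica and step, showing how many replicas were involved
                    System.out.println(event.getSource() + " - " + event.getDescription());
                }
            }
        }
    }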

Re: Wrong Consistency level seems to be used

2022-07-21 Thread Tolbert, Andy
I'd bet the JIRA that Paul is pointing to is likely what's happening here. I'd look for read repair errors in your system logs or in your metrics (if you have easy access to them). There are operations that can happen during the course of a query being executed that may happen at different CLs,

Re: Wrong Consistency level seems to be used

2022-07-21 Thread Paul Chandler
see if that ticket applies to your experience. Thanks Paul Chandler > On 21 Jul 2022, at 15:12, pwozniak wrote: > > Yes, I did it. Nothing like this in my code. Consistency level is set only in > one place (shown below). > > > > On 7/21/22 4:08 PM, manish khandelw

Re: Wrong Consistency level seems to be used

2022-07-21 Thread pwozniak
Yes, I did it. Nothing like this in my code. Consistency level is set only in one place (shown below). On 7/21/22 4:08 PM, manish khandelwal wrote: Consistency can also be set on a statement basis. So please check in your code that you might be setting consistency 'ALL' for some que

Re: Wrong Consistency level seems to be used

2022-07-21 Thread Bowen Song via user
It doesn't make any sense to see consistency level ALL if the code is not explicitly using it. My best guess is somewhere in the code the consistency level was overridden. On 21/07/2022 14:52, pwozniak wrote: Hi, we have the following code (java driver): cluster =Cluster.bu

Re: Wrong Consistency level seems to be used

2022-07-21 Thread manish khandelwal
Consistency can also be set on a statement basis. So please check in your code that you might be setting consistency 'ALL' for some queries. On Thu, Jul 21, 2022 at 7:23 PM pwozniak wrote: > Hi, > > we have the following code (java driver): > > cluster = Cluster.b

Wrong Consistency level seems to be used

2022-07-21 Thread pwozniak
)) .withTimestampGenerator(new AtomicMonotonicTimestampGenerator()) .withCredentials(userName, password).build(); session =cluster.connect(keyspaceName); where ConsistencyLevel.QUORUM is our default consistency level. But we keep receiving the following exceptions
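
As manish notes above, a statement-level setting overrides that default; a minimal sketch with the 3.x Java driver (hypothetical contact point, keyspace and table) showing how a QUORUM default set via QueryOptions coexists with a per-statement override:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.QueryOptions;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;

    public class DefaultVsStatementConsistency {
        public static void main(String[] args) {
            QueryOptions options = new QueryOptions().setConsistencyLevel(ConsistencyLevel.QUORUM);
            try (Cluster cluster = Cluster.builder()
                    .addContactPoint("127.0.0.1")
                    .withQueryOptions(options)        // cluster-wide default: QUORUM
                    .build();
                 Session session = cluster.connect("my_ks")) {
                SimpleStatement byDefault = new SimpleStatement("SELECT * FROM t WHERE id = 1");
                session.execute(byDefault);           // runs at QUORUM (the default)

                SimpleStatement overridden = new SimpleStatement("SELECT * FROM t WHERE id = 1");
                overridden.setConsistencyLevel(ConsistencyLevel.ALL);
                session.execute(overridden);          // runs at ALL despite the QUORUM default
            }
        }
    }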

Re: Validate data consistency after nodetool rebuild

2020-07-09 Thread Jeff Jirsa
Not in 3.11, though 4.0 adds preview repair which can sorta do this if you're also running incremental repair. Just run nodetool repair, use subranges if needed. If you stream data, they're out of sync. If you don't stream data, they're in sync. On Thu, Jul 9, 2020 at 3:16 PM Jai Bheemsen Rao D

Validate data consistency after nodetool rebuild

2020-07-09 Thread Jai Bheemsen Rao Dhanwada
Hello, I am trying to expand my C* cluster to a new region, followed by keyspace expansion and nodetool rebuild -- sourceDC. Once the rebuild process is complete, is there a way to identify if all the data between two regions is in sync? Since the data size is large, I cannot run select count(*).

Re: Consistency level shows as null in Java driver

2020-06-12 Thread manish khandelwal
This is how the getConsistencyLevel method is implemented: it returns the consistency level of the query, or null if no consistency level has been set using setConsistencyLevel. Regards Manish On Fri, Jun 12, 2020 at 3:43 PM Manu Chadha wrote: > Hi > > In my Cassandra Java driver c
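
A small sketch of that behaviour (driver 3.x, hypothetical query): null just means nothing was set on the statement itself, and the driver then falls back to the QueryOptions default.

    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.QueryOptions;
    import com.datastax.driver.core.SimpleStatement;

    public class NullConsistencyLevel {
        public static void main(String[] args) {
            SimpleStatement stmt = new SimpleStatement("SELECT * FROM t WHERE id = 1");
            System.out.println(stmt.getConsistencyLevel());   // null: nothing set on the statement

            stmt.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
            System.out.println(stmt.getConsistencyLevel());   // LOCAL_QUORUM

            // When a statement carries no level of its own, the one configured in QueryOptions
            // (set when building the Cluster) is used at execution time.
            System.out.println("driver default: " + new QueryOptions().getConsistencyLevel());
        }
    }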

Consistency level shows as null in Java driver

2020-06-12 Thread Manu Chadha
Hi In my Cassandra Java driver code, I am creating a query and then I print the consistency level of the query val whereClause = whereConditions(tablename, id); cassandraRepositoryLogger.trace("getRowsByPartitionKeyId: looking in table "+tablename+" wit

Re: Issue with parallel LWT writes (and reads with serial consistency) overwriting data

2020-06-06 Thread Jeff Jirsa
Cassandra. > > I believe writes from 2 different clients at (essentially precisely) the same > time on the same table & row have no knowledge of one another. Each unique > LWT did what was asked of it, read the data and wrote as requested. Last > write won. This is the definition

Re: Issue with parallel LWT writes (and reads with serial consistency) overwriting data

2020-06-06 Thread Michael Shuler
the definition of eventual consistency, and you found an edge case for LWT usage. There may be other suggestions, but I think the simplest method to get as close to a guarantee that your LWT functions as you wish, would be to take the parallel access out of the equation. Create a canonical user

Issue with parallel LWT writes (and reads with serial consistency) overwriting data

2020-06-05 Thread Thiranjith Weerasinghe
update from app1 (running on node1 succeeded - i.e. ResultSet#wasApplied returns true). However, when app2 (on node2 reads the data, it is getting stale data before app1 updated it). I'd like to know why this is happening because LWT with serial consistency should prevent this ty

Re: Consistency with Datacenter switch

2020-03-15 Thread manish khandelwal
Yes Jeff, want to achieve the same ( *You're trying to go local quorum in one dc to local quorum in the other dc without losing any writes*) Thanks for your quick response. Regards Manish On Mon, Mar 16, 2020 at 10:58 AM Jeff Jirsa wrote: > > You’re trying to go local quorum in one dc to lo

Re: Consistency with Datacenter switch

2020-03-15 Thread Jeff Jirsa
You’re trying to go local quorum in one dc to local quorum in the other dc without losing any writes? The easiest way to do this strictly correctly is to take the latency hit and do quorum while you run repair, then you can switch to local quorum on the other side. A few more notes inline
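
A sketch of that switch on the client side (driver 3.x, with a hypothetical flag that is flipped once the repair completes): run at QUORUM for the duration of the repair, then drop back to LOCAL_QUORUM.

    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.SimpleStatement;

    public class DcSwitchConsistency {
        // Flip this (e.g. from configuration) once the cross-DC repair has completed.
        static volatile boolean repairInProgress = true;

        static SimpleStatement readUser(long id) {
            SimpleStatement stmt = new SimpleStatement("SELECT * FROM users WHERE id = ?", id);
            // QUORUM requires a majority of replicas across both DCs, which is the latency hit
            // taken while the repair runs; LOCAL_QUORUM afterwards restores DC-local latency.
            stmt.setConsistencyLevel(repairInProgress ? ConsistencyLevel.QUORUM
                                                      : ConsistencyLevel.LOCAL_QUORUM);
            return stmt;
        }
    }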

Consistency with Datacenter switch

2020-03-15 Thread manish khandelwal
While switching over datacenters, there is a chance of mutation drop because of which inconsistency may occur. To avoid inconsistency we can do following : Monitor and if require then run repair 1. Monitor tpstats in all nodes. If dropped message count is 0, it can be inferred no mutation

Re: How to assure data consistency in switch over to standby dc

2020-01-16 Thread Oleksandr Shulgin
On Thu, Jan 16, 2020 at 3:18 PM Laxmikant Upadhyay wrote: > > You are right, that will solve the problem. but unfortunately i won't be > able to meet my sla with write each quorum . I am using local quorum for > both read and write. > Any other way ? > Is your read SLO more sensitive than write S

Re: How to assure data consistency in switch over to standby dc

2020-01-16 Thread Jean Carlo
Hello Laxmikant, your application has to deal with eventual consistency if you are using Cassandra. Ensure that R + W > RF (for example, RF 3 with LOCAL_QUORUM reads and writes gives 2 + 2 > 3, so every read overlaps at least one replica that acknowledged the write) and have repairs running periodically. This is the best way to be the most consistent and coherent. Jean Carlo "The best way to predict the future is to i

Re: How to assure data consistency in switch over to standby dc

2020-01-16 Thread Laxmikant Upadhyay
ctive DC (let's say using local_quorum). >> >> you have "to switch" your clients without any issues since your writes >> are replicated on all DC. >> --> that is not true because there is a chance of mutation drop. (Hints, >> read repair may

Re: How to assure data consistency in switch over to standby dc

2020-01-16 Thread Oleksandr Shulgin
u have "to switch" your clients without any issues since your writes are > replicated on all DC. > --> that is not true because there is a chance of mutation drop. (Hints, > read repair may help to some extent but data consistency is not guaranteed > unless you run anti- ent

Re: How to assure data consistency in switch over to standby dc

2020-01-16 Thread Laxmikant Upadhyay
cated on all DC. --> that is not true because there is a chance of mutation drop. (Hints, read repair may help to some extent but data consistency is not guaranteed unless you run anti- entropy repair ) On Thu, Jan 16, 2020, 3:45 PM Ahmed Eljami wrote: > Hello, > > What do you m

Re: How to assure data consistency in switch over to standby dc

2020-01-16 Thread Ahmed Eljami
> I am thinking of below approaches : > > 1.Before switching run the repair (although it assure consistency mostly > but repair itself may take long time to complete) > > 2. Monitor the dropped message bean : If no message dropped since last > successful repair then it i

How to assure data consistency in switch over to standby dc

2020-01-16 Thread Laxmikant Upadhyay
consistency mostly but repair itself may take long time to complete) 2. Monitor the dropped message bean : If no message dropped since last successful repair then it is good to switch without running repair. 3. Monitor the hints backlog (files in hint directory), if no backlog then it is good to

Re: how to change a write's and a read's consistency level separately in cqlsh?

2019-07-01 Thread Oleksandr Shulgin
On Sat, Jun 29, 2019 at 6:19 AM Nimbus Lin wrote: > > On the 2nd question, would you like to tell me how to change a > write's and a read's consistency level separately in cqlsh? > Not that I know of special syntax for that, but you may add an explicit "CONSIST

how to change a write's and a read's consistency level separately in cqlsh?

2019-06-28 Thread Nimbus Lin
n Jconsole later. On the 2nd question, would you like to tell me how to change a write's and a read's consistency level separately in cqlsh? Otherwise, how does the document's R + W > replication factor rule guarantee strongly consistent writes and reads? Thank you! Si

Re: Necessary consistency level for LWT writes

2019-05-23 Thread Craig Pastro
> > > > Thank you for your response! > > > > Hmm, my understanding is slightly different I think. Please let me try > to explain one situation and let me know what you think. > > > > 1. Do a LWT write with serial_consistency = SERIAL (default) and > cons

Re: Necessary consistency level for LWT writes

2019-05-23 Thread Hiroyuki Yamada
rent I think. Please let me try to > explain one situation and let me know what you think. > > 1. Do a LWT write with serial_consistency = SERIAL (default) and consistency > = ONE. > 2. LWT starts its Paxos phase and has communicated with a quorum of nodes > 3. At this point a read

Re: Necessary consistency level for LWT writes

2019-05-23 Thread Craig Pastro
Dear Hiro, Thank you for your response! Hmm, my understanding is slightly different I think. Please let me try to explain one situation and let me know what you think. 1. Do a LWT write with serial_consistency = SERIAL (default) and consistency = ONE. 2. LWT starts its Paxos phase and has

Re: Necessary consistency level for LWT writes

2019-05-23 Thread Hiroyuki Yamada
Hi Craig, I'm not 100 % sure about some corner cases, but I'm sure that LWT should be used with the following consistency levels usually. LWT write: serial_consistency_level: SERIAL consistency_level: QUORUM LWT read: consistency_level: SERIAL (It's a bit weird and mis-leading a
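
A sketch of those recommended levels with the Java driver 3.x (table and keys are hypothetical): the conditional write uses SERIAL for the Paxos rounds and QUORUM for the commit, and the follow-up read at SERIAL completes any in-flight Paxos round before returning, so it observes the latest accepted value.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;

    public class LwtWriteThenSerialRead {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("my_ks")) {
                // LWT write: SERIAL for the Paxos rounds, QUORUM for the commit of the accepted value.
                SimpleStatement write = new SimpleStatement(
                        "UPDATE accounts SET balance = 100 WHERE id = 1 IF balance = 0");
                write.setSerialConsistencyLevel(ConsistencyLevel.SERIAL);
                write.setConsistencyLevel(ConsistencyLevel.QUORUM);
                System.out.println("applied: " + session.execute(write).wasApplied());

                // LWT read: CL SERIAL makes the read linearizable with respect to the write above.
                SimpleStatement read = new SimpleStatement("SELECT balance FROM accounts WHERE id = 1");
                read.setConsistencyLevel(ConsistencyLevel.SERIAL);
                ResultSet rs = session.execute(read);
                Row row = rs.one();
                System.out.println("balance: " + (row == null ? "n/a" : row.getLong("balance")));
            }
        }
    }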

Necessary consistency level for LWT writes

2019-05-22 Thread Craig Pastro
Hello! I am trying to understand the consistency level (not serial consistency) required for LWTs. Basically what I am trying to understand is that if a consistency level of ONE is enough for a LWT write operation if I do my read with a consistency level of SERIAL? It would seem so based on what

Re: Is There a Way To Proactively Monitor Reads Returning No Data Due to Consistency Level?

2019-05-07 Thread Jeff Jirsa
Short answer is no, because missing consistency isn’t an error and there’s no way to know you’ve missed data without reading at ALL, and if it were ok to read at ALL you’d already be doing it (it’s not ok for most apps). > On May 7, 2019, at 8:05 AM, Fd Habash wrote: > > Typicall

Is There a Way To Proactively Monitor Reads Returning No Data Due to Consistency Level?

2019-05-07 Thread Fd Habash
Typically, when a read is submitted to C*, it may complete with … 1. No errors & returns expected data 2. Errors out with UnavailableException 3. No error & returns zero rows on first attempt, but returned on subsequent runs. The third scenario happens as a result of cluster entropy specially d

Re: Bootstrapping to Replace a Dead Node vs. Adding a New Node: Consistency Guarantees

2019-05-01 Thread Fred Habash
s://www.instaclustr.com/platform/ > > > > > > > > > > > > *From: *Fd Habash > *Reply-To: *"user@cassandra.apache.org" > *Date: *Wednesday, 1 May 2019 at 06:18 > *To: *"user@cassandra.apache.org" > *Subject: *Bootstrapping to Replace a

Re: Bootstrapping to Replace a Dead Node vs. Adding a New Node: Consistency Guarantees

2019-05-01 Thread Fred Habash
nesday, 1 May 2019 at 06:18 > To: "user@cassandra.apache.org" > Subject: Bootstrapping to Replace a Dead Node vs. Adding a New Node: > Consistency Guarantees > > Reviewing the documentation & based on my testing, using C* 2.2.8, I was not > able to extend the

Re: Bootstrapping to Replace a Dead Node vs. Adding a New Node: Consistency Guarantees

2019-04-30 Thread Alok Dwivedi
/platform/ From: Fd Habash Reply-To: "user@cassandra.apache.org" Date: Wednesday, 1 May 2019 at 06:18 To: "user@cassandra.apache.org" Subject: Bootstrapping to Replace a Dead Node vs. Adding a New Node: Consistency Guarantees Reviewing the documentation & based on my

Bootstrapping to Replace a Dead Node vs. Adding a New Node: Consistency Guarantees

2019-04-30 Thread Fd Habash
Reviewing the documentation & based on my testing, using C* 2.2.8, I was not able to extend the cluster by adding multiple nodes simultaneously. I got an error message … Other bootstrapping/leaving/moving nodes detected, cannot bootstrap while cassandra.consistent.rangemovement is true I unde

Re: [EXTERNAL] Re: Getting Consistency level TWO when it is requested LOCAL_ONE

2019-04-12 Thread Jean Carlo
issues.apache.org/jira/browse/CASSANDRA-9620 that says ' *Writing > the batch log will always be done using CL ONE.*' Contradict what I > understood from datastax's doc > > Yes I understood batches are not for speed. Still we are using it for a > consistency need. >

Re: [EXTERNAL] Re: Getting Consistency level TWO when it is requested LOCAL_ONE

2019-04-12 Thread Jean Carlo
e.org/jira/browse/CASSANDRA-9620 that says ' *Writing the batch log will always be done using CL ONE.*' Contradict what I understood from datastax's doc Yes I understood batches are not for speed. Still we are using it for a consistency need. @Mahesh Yes we do set the consistency l

RE: [EXTERNAL] Re: Getting Consistency level TWO when it is requested LOCAL_ONE

2019-04-11 Thread Durity, Sean R
: [EXTERNAL] Re: Getting Consistency level TWO when it is requested LOCAL_ONE Hi Jean, I want to understand how you are setting the write consistency level as LOCAL ONE. That is with every query you mentioning consistency level or you have set the spring cassandra config with provided

Re: Getting Consistency level TWO when it is requested LOCAL_ONE

2019-04-11 Thread Mahesh Daksha
Hi Jean, I want to understand how you are setting the write consistency level as LOCAL ONE. That is with every query you mentioning consistency level or you have set the spring cassandra config with provided consistency level. Like this: cluster.setQueryOptions(new QueryOptions

Getting Consistency level TWO when it is requested LOCAL_ONE

2019-04-11 Thread Jean Carlo
Hello everyone, I have a case where the developers are using spring data framework for Cassandra. We are writing batches setting consistency level at LOCAL_ONE but we got a timeout like this *Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException: Cassandra timeout during BATCH_LOG
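
For reference, the batch-log write that raised that timeout only exists for logged batches; a sketch with the 3.x Java driver (hypothetical tables), showing that an unlogged batch skips the batch log entirely, at the cost of the batch's atomicity guarantee.

    import com.datastax.driver.core.BatchStatement;
    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;

    public class LoggedVsUnloggedBatch {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("my_ks")) {
                // LOGGED (the default): the coordinator first persists the batch to the batch log
                // on other nodes, and that internal write does not use the batch's own CL.
                BatchStatement logged = new BatchStatement(BatchStatement.Type.LOGGED);
                logged.add(new SimpleStatement("INSERT INTO t1 (id, v) VALUES (1, 'a')"));
                logged.add(new SimpleStatement("INSERT INTO t2 (id, v) VALUES (1, 'a')"));
                logged.setConsistencyLevel(ConsistencyLevel.LOCAL_ONE); // applies to the mutations only
                session.execute(logged);

                // UNLOGGED: no batch log, so LOCAL_ONE is the only consistency level involved,
                // but the statements are no longer applied atomically as a group.
                BatchStatement unlogged = new BatchStatement(BatchStatement.Type.UNLOGGED);
                unlogged.add(new SimpleStatement("INSERT INTO t1 (id, v) VALUES (2, 'b')"));
                unlogged.add(new SimpleStatement("INSERT INTO t2 (id, v) VALUES (2, 'b')"));
                unlogged.setConsistencyLevel(ConsistencyLevel.LOCAL_ONE);
                session.execute(unlogged);
            }
        }
    }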

Re: Error during truncate: Cannot achieve consistency level ALL , how to fix it

2018-09-27 Thread Alain RODRIGUEZ
SCRIBE KEYSPACE system_auth;" Or to check them all: cqlsh -e "DESCRIBE KEYSPACES;" I don't know what's wrong exactly, but your application is truncating with a consistency level of 'ALL', meaning all the replicas must be up for your application to work. On Wed, 19 Sept 2

Re: Error during truncate: Cannot achieve consistency level ALL , how to fix it

2018-09-19 Thread sha p
*To:* user@cassandra.apache.org *Subject:* Error during truncate: Cannot achieve consistency level ALL , how to fix it Hi All, I am new to Cassandra. Following below link https://grokonez.com/spring-framework/spring-data/start-spring-data-cassandra-springboot#III_Sourcec

RE: Error during truncate: Cannot achieve consistency level ALL , how to fix it

2018-09-19 Thread Jonathan Baynes
What RF is your system_auth keyspace? If it's one, match it to the user keyspace, and restart the node. From: sha p [mailto:shatestt...@gmail.com] Sent: 19 September 2018 11:49 To: user@cassandra.apache.org Subject: Error during truncate: Cannot achieve consistency level ALL , how to fix it Hi
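
A sketch of that adjustment (DC name, replica count and credentials are hypothetical); run a repair of system_auth afterwards so the role data actually lands on the new replicas.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class RaiseSystemAuthRf {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1")
                    .withCredentials("cassandra", "cassandra").build();
                 Session session = cluster.connect()) {
                // Match system_auth to the application keyspace's replication settings.
                session.execute("ALTER KEYSPACE system_auth WITH replication = "
                        + "{'class': 'NetworkTopologyStrategy', 'DC1': 3}");
                // Then run `nodetool repair system_auth` on each node so the existing
                // credentials are copied to the new replicas.
            }
        }
    }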

Error during truncate: Cannot achieve consistency level ALL , how to fix it

2018-09-19 Thread sha p
t with RF = 2 , but when I run >>> this application from above source code bellow error is thrown >>> Caused by: com.datastax.driver.core.exceptions.TruncateException: Error >>> during truncate: Cannot achieve consistency level ALL """ >>> >>> >>> What wrong i am doing here ..How to fix it ? Plz help me. >>> >>> Regards, >>> Shyam >>> >>

Re: Tuning Replication Factor - All, Consistency ONE

2018-07-11 Thread Jürgen Albersdorfer
ssandra, I think the optimal approach >> for my use-case would be to replicate the data on *ALL* nodes possible, >> but require reads to only have a consistency level of one. So, in the case >> that a node goes down, we can still read/write to other nodes. It is not >>

Re: Tuning Replication Factor - All, Consistency ONE

2018-07-10 Thread Jeff Jirsa
r my use-case would be to replicate the data on *ALL* nodes possible, > but require reads to only have a consistency level of one. So, in the case > that a node goes down, we can still read/write to other nodes. It is not > very important that a read be unanimously agreed upon, as long a

Tuning Replication Factor - All, Consistency ONE

2018-07-10 Thread Code Wiget
replicate the data on ALL nodes possible, but require reads to only have a consistency level of one. So, in the case that a node goes down, we can still read/write to other nodes. It is not very important that a read be unanimously agreed upon, as long as Cassandra is eventually consistent, within
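
As a sketch of that layout (hypothetical three-node cluster, keyspace and table names): give the keyspace a replication factor equal to the node count so every node holds all the data, and let reads and writes use ONE so any single surviving node can serve them; the trade-off is that a read is not guaranteed to see the most recent write until the replicas have converged.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;

    public class ReplicateEverywhereReadOne {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect()) {
                // RF equal to the number of nodes in the DC (3 here), so every node holds all data.
                session.execute("CREATE KEYSPACE IF NOT EXISTS tokens WITH replication = "
                        + "{'class': 'NetworkTopologyStrategy', 'DC1': 3}");
                session.execute("CREATE TABLE IF NOT EXISTS tokens.by_id (id int PRIMARY KEY, value text)");

                SimpleStatement write = new SimpleStatement(
                        "INSERT INTO tokens.by_id (id, value) VALUES (1, 'x')");
                write.setConsistencyLevel(ConsistencyLevel.ONE); // succeeds while any replica is up
                session.execute(write);

                SimpleStatement read = new SimpleStatement("SELECT value FROM tokens.by_id WHERE id = 1");
                read.setConsistencyLevel(ConsistencyLevel.ONE);  // any single replica can answer
                session.execute(read);
            }
        }
    }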

Re: data consistency without using nodetool repair

2018-06-09 Thread Jeff Jirsa
t i'm short of resources) > and WCl=ONE and RCL=ONE in a cluster of 10 nodes in a insert-only scenario. > The problem: i dont want to use nodetool repair because it would put hige > load on my cluster for a long time, but also i need data consistency > and fault tolerance in a

Re: data consistency without using nodetool repair

2018-06-09 Thread onmstester onmstester
ould put huge load on my cluster for a long time, but also i need data consistency and fault tolerance in a way that: if one of my nodes fails: 1. there would be no single record data loss This requires write > 1 2. write/read of data would be continued with no problem This

Re: data consistency without using nodetool repair

2018-06-09 Thread Jeff Jirsa
odetool repair because it would put huge > load on my cluster for a long time, but also i need data consistency > and fault tolerance in a way that: > if one of my nodes fails: > 1. there would be no single record data loss This requires write > 1 > 2. write/read of data would be

data consistency without using nodetool repair

2018-06-09 Thread onmstester onmstester
need data consistency and fault tolerance in a way that: if one of my nodes fails: 1. there would be no single record data loss 2. write/read of data would be continued with no problem I know that current config won't satisfy No.1, so changed the Write Consistency Level to ALL and to satisfy No.

Re: read repair with consistency one

2018-04-25 Thread Grzegorz Pietrusza
Hi Ben Thanks a lot. From my analysis of the code it looks like you are right. When global read repair kicks in all live endpoints are queried for data, regardless of consistency level. Only EACH_QUORUM is treated differently. Cheers Grzegorz 2018-04-22 1:45 GMT+02:00 Ben Slater : > I have

Re: read repair with consistency one

2018-04-21 Thread Ben Slater
> I'm a bit confused with how read repair works in my case, which is: >> - multiple DCs with RF 1 (NetworkTopologyStrategy) >> - reads with consistency ONE >> >> >> The article #1 says that read repair in fact runs RF reads for some >> percent of the reque

Re: read repair with consistency one

2018-04-21 Thread Grzegorz Pietrusza
rs via Reaper or your own method it will resolve your > discrepancies. > > On Apr 21, 2018, 3:16 AM -0400, Grzegorz Pietrusza , > wrote: > > Hi all > > I'm a bit confused with how read repair works in my case, which is: > - multiple DCs with RF 1 (NetworkTopologyStrat

Re: read repair with consistency one

2018-04-21 Thread Rahul Singh
my case, which is: > - multiple DCs with RF 1 (NetworkTopologyStrategy) > - reads with consistency ONE > > > The article #1 says that read repair in fact runs RF reads for some percent > of the requests. Let's say I have read_repair_chance = 0.1. Does it mean that >

read repair with consistency one

2018-04-21 Thread Grzegorz Pietrusza
Hi all I'm a bit confused with how read repair works in my case, which is: - multiple DCs with RF 1 (NetworkTopologyStrategy) - reads with consistency ONE The article #1 says that read repair in fact runs RF reads for some percent of the requests. Let's say I have read_repair_chance =

Re: Measuring eventual consistency latency

2018-03-26 Thread Jeronimo de A. Barros
/CASSANDRA-11569 > > Cheers, > > Christophe > > On 26 March 2018 at 10:01, Jeronimo de A. Barros < > jeronimo.bar...@gmail.com> wrote: > >> I'd like to know if there is a reasonable method to measure how long take >> to have the data available across a

Re: Measuring eventual consistency latency

2018-03-25 Thread Jeff Jirsa
easonable method to measure how long take to >> have the data available across all replica nodes in a multi DC environment >> using LOCAL_ONE or LOCAL_QUORUM consistency levels. >> >> If already there be a study about this topic in some place and someone could >> point me

Re: Measuring eventual consistency latency

2018-03-25 Thread Christophe Schmitz
: > I'd like to know if there is a reasonable method to measure how long take > to have the data available across all replica nodes in a multi DC > environment using LOCAL_ONE or LOCAL_QUORUM consistency levels. > > If already there be a study about this topic in some place and some

Measuring eventual consistency latency

2018-03-25 Thread Jeronimo de A. Barros
I'd like to know if there is a reasonable method to measure how long take to have the data available across all replica nodes in a multi DC environment using LOCAL_ONE or LOCAL_QUORUM consistency levels. If already there be a study about this topic in some place and someone could point m

Consistency level for the COPY command

2018-03-09 Thread Jai Bheemsen Rao Dhanwada
Hello, What is the consistency level used when performing COPY command using CQL interface? don't see anything in the documents https://docs.datastax.com/en/cql/3.1/cql/cql_reference/copy_r.html I am setting CONSISTENCY LEVEL at the cql level and then running a copy command, does that

Re: Driver consistency issue

2018-02-27 Thread Jeff Jirsa
in Cassandra (cassandra version 3.0.9- total 12 Servers >>>> )With below definition: >>>> >>>> {'DC1': '2', 'class': >>>> 'org.apache.cassandra.locator.NetworkTopologyStrategy'} >>>> >

Re: Driver consistency issue

2018-02-27 Thread horschi
> oleksandr.shul...@zalando.de> wrote: >>> >>>> On Tue, Feb 27, 2018 at 9:45 AM, Abhishek Kumar Maheshwari < >>>> abhishek.maheshw...@timesinternet.in> wrote: >>>> >>>>> >>>>> i have a KeySpace in Cassandra (cassandra vers

Re: Driver consistency issue

2018-02-27 Thread Abhishek Kumar Maheshwari
heshw...@timesinternet.in> wrote: >>> >>>> >>>> i have a KeySpace in Cassandra (cassandra version 3.0.9- total 12 >>>> Servers )With below definition: >>>> >>>> {'DC1': '2', 'class': 'o

Re: Driver consistency issue

2018-02-27 Thread Nicolas Guyomar
>> >>> >>> i have a KeySpace in Cassandra (cassandra version 3.0.9- total 12 >>> Servers )With below definition: >>> >>> {'DC1': '2', 'class': 'org.apache.cassandra.locator. >>> NetworkTopologyStrategy'}

Re: Driver consistency issue

2018-02-27 Thread Abhishek Kumar Maheshwari
s >> )With below definition: >> >> {'DC1': '2', 'class': 'org.apache.cassandra.locator. >> NetworkTopologyStrategy'} >> >> Some time i am getting below exception >> >> [snip] > >> Caused by: com.datastax.d

Re: Driver consistency issue

2018-02-27 Thread Oleksandr Shulgin
'org.apache.cassandra.locator. > NetworkTopologyStrategy'} > > Some time i am getting below exception > > [snip] > Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException: > Cassandra timeout during write query at consistency

Driver consistency issue

2018-02-27 Thread Abhishek Kumar Maheshwari
eptions.WriteTimeoutException: Cassandra timeout during write query at consistency QUORUM (3 replica were required but only 2 acknowledged the write) at com.datastax.driver.core.exceptions.WriteTimeoutException.copy(WriteTimeoutException.java:73) at com.datastax.driver.co

Re: consistency against rebuild a new DC

2017-11-27 Thread kurt greaves
No. Rebuilds don't keep consistency as they aren't smart enough to stream from a specific replica, this all replicas for a rebuild can stream from a single replica. You need to repair after rebuilding. If you're using NTS with #racks >= RF you can stream consistently. if this p

consistency against rebuild a new DC

2017-11-27 Thread Peng Xiao
Hi there, We know that we need to run repair regularly to keep data consistent. Suppose we have DC1 & DC2; if we add a new DC3 and rebuild from DC1, can we assume that DC3 is consistent with DC1, at least at the time when DC3 is rebuilt successfully? Thanks, Peng Xiao,

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-03 Thread Daniel Hölbling-Inzko
7;: > 'NetworkTopologyStrategy', 'datacenter1': 2}; > cqlsh> select * from test_rf.t1; > > id | data > +------ > > (0 rows) > > And in my test this happens on all nodes at the same time. Explanation is > fairly simple: now a different node is r

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-03 Thread Oleksandr Shulgin
er1': 2}; cqlsh> select * from test_rf.t1; id | data +-- (0 rows) And in my test this happens on all nodes at the same time. Explanation is fairly simple: now a different node is responsible for the data that was written to only one other node previously. A repair in this tiny

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-03 Thread Daniel Hölbling-Inzko
storage would be ok as every query would be checked by another > node too. I was only seeing inconsistencies since clients went directly to > the node with Consistency ONE > > Greetings > Jeff Jirsa wrote on Wed, 2 Aug 2017 at 16:01: > >> By the time bootstrap is complete i

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-03 Thread Jeff Jirsa
Reads via storage would be ok as every query would be checked by another node > too. I was only seeing inconsistencies since clients went directly to the > node with Consistency ONE > > Greetings > Jeff Jirsa wrote on Wed, 2 Aug 2017 at 16:01: >> By the time bootst

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-02 Thread Daniel Hölbling-Inzko
Thanks Jeff. How do I determine that bootstrap is finished? Haven't seen that anywhere so far. Reads via storage would be ok as every query would be checked by another node too. I was only seeing inconsistencies since clients went directly to the node with Consistency ONE Greetings Jeff

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-02 Thread Jeff Jirsa
nt by the time bootstrap finishes -- Jeff Jirsa > On Aug 2, 2017, at 1:53 AM, Daniel Hölbling-Inzko > wrote: > > Hi, > It's probably a strange question but I have a heavily read-optimized payload > where data integrity is not a big deal. So to keep latencies low I am

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-02 Thread kurt greaves
only in this one case might that work (RF==N)

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-02 Thread Oleksandr Shulgin
On Wed, Aug 2, 2017 at 10:53 AM, Daniel Hölbling-Inzko < daniel.hoelbling-in...@bitmovin.com> wrote: > > Any advice on how to avoid this in the future? Is there a way to start up > a node that does not serve client requests but does replicate data? > Would it not work if you first increase the RF

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-02 Thread kurt greaves
You can't just add a new DC and then tell their clients to connect to the new one (after migrating all the data to it obv.)? If you can't achieve that you should probably use GossipingPropertyFileSnitch.​ Your best plan is to have the desired RF/redundancy from the start. Changing RF in production

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-02 Thread Daniel Hölbling-Inzko
Thanks for the pointers Kurt! I did increase the RF to N so that would not have been the issue. DC migration is also a problem since I am using the Google Cloud Snitch. So I'd have to take down the whole DC and restart anew (which would mess with my clients as they only connect to their local DC).

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-02 Thread kurt greaves
If you want to change RF on a live system your best bet is through DC migration (add another DC with the desired # of nodes and RF), and migrate your clients to use that DC. There is a way to boot a node and not join the ring, however I don't think it will work for new nodes (have not confirmed), a

Bootstrapping a new Node with Consistency=ONE

2017-08-02 Thread Daniel Hölbling-Inzko
Hi, It's probably a strange question but I have a heavily read-optimized payload where data integrity is not a big deal. So to keep latencies low I am reading with Consistency ONE from my Multi-DC Cluster. Now the issue I saw is that I needed to add another Cassandra node (for redundancy re

Re: Cannot achieve consistency level LOCAL_ONE

2017-07-09 Thread Justin Cameron
It's best-practice to disable the default user ("cassandra" user) after enabling password authentication on your cluster. The default user reads with a CL.QUORUM when authenticating, while other users use CL.LOCAL_ONE. This means it's more likely you could experience authentication issues, even if

Re: Cannot achieve consistency level LOCAL_ONE

2017-07-07 Thread Oleksandr Shulgin
On Thu, Jul 6, 2017 at 6:58 PM, Charulata Sharma (charshar) < chars...@cisco.com> wrote: > Hi, > > I am facing similar issues with SYSTEM_AUTH keyspace and wanted to know > the implication of disabling the "*cassandra*" superuser. > Unless you have scheduled any tasks that require the user with t

Re: Cannot achieve consistency level LOCAL_ONE

2017-07-06 Thread Charulata Sharma (charshar)
4, 2017 at 2:16 AM To: Oleksandr Shulgin <oleksandr.shul...@zalando.de> Cc: "user@cassandra.apache.org" Subject: Re: Cannot achieve consistency level LOCAL_ONE Thanks for the detail explanation

Re: Impact of Write without consistency level and mutation failures on reads and cluster

2017-06-16 Thread Jeff Jirsa
On 2017-06-15 19:10 (-0700), srinivasarao daruna wrote: > Hi, > > Recently one of our spark job had missed cassandra consistency property and > number of concurrent writes property. Just for the record, you still have a consistency level set, it's just set to whatever

Impact of Write without consistency level and mutation failures on reads and cluster

2017-06-15 Thread srinivasarao daruna
Hi, Recently one of our spark job had missed cassandra consistency property and number of concurrent writes property. Due to that, some of mutations are failed when we checked tpstats. Also, we observed readtimeouts are occurring with not only the table that the job inserts, but also from other

Re: Cannot achieve consistency level LOCAL_ONE

2017-06-14 Thread wxn...@zjqunshuo.com
Thanks for the detail explanation. You did solve my problem. Cheers, -Simon From: Oleksandr Shulgin Date: 2017-06-14 17:09 To: wxn...@zjqunshuo.com CC: user Subject: Re: Cannot achieve consistency level LOCAL_ONE On Wed, Jun 14, 2017 at 10:46 AM, wxn...@zjqunshuo.com wrote: Thanks for the

Re: Cannot achieve consistency level LOCAL_ONE

2017-06-14 Thread Oleksandr Shulgin
On Wed, Jun 14, 2017 at 10:46 AM, wxn...@zjqunshuo.com wrote: > Thanks for the reply. > My system_auth settings is as below and what should I do with it? And I'm > interested why the newly added node is responsible for the user > authentication? > > CREATE KEYSPACE system_auth WITH replication =

Re: Cannot achieve consistency level LOCAL_ONE

2017-06-14 Thread wxn...@zjqunshuo.com
replication_factor': '1'} AND durable_writes = true; -Simon From: Oleksandr Shulgin Date: 2017-06-14 16:36 To: wxn...@zjqunshuo.com CC: user Subject: Re: Cannot achieve consistency level LOCAL_ONE On Wed, Jun 14, 2017 at 9:11 AM, wxn...@zjqunshuo.com wrote: Hi, Cluster set up: 1 DC with 5

Re: Cannot achieve consistency level LOCAL_ONE

2017-06-14 Thread Oleksandr Shulgin
During the down > period, all 4 other nodes report "Cannot achieve consistency > level LOCAL_ONE" constantly until I brought up the dead node. My data > seems lost during that down time. To me this could not happen because the > write CL is LOCAL_ONE and only one node was dea
