Hi,
On Fri, May 17, 2024 at 6:18 PM Jon Haddad wrote:
> I strongly suggest you don't use materialized views at all. There are
> edge cases that in my opinion make them unsuitable for production, both in
> terms of cluster stability as well as data integrity.
>
Oh, there is already an open ...

>> ... (~100 MB in all), the keyspace's
>> replication factor is 3, everything works fine... except: if I restart a
>> node, I get a lot of errors with materialized views and consistency level
>> ONE, but only for those tables for which there is more than one
>> materialized view.
>> Tables without materialized views don't have this problem and work fine.
>> Tables that have only one materialized view also work fine.
>> But a table with ...
So for upgrading Paxos to v2, the non-serial consistency level should be set to
ANY or LOCAL_QUORUM, and the serial consistency level should still be SERIAL or
LOCAL_SERIAL. Got it, thanks!
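For anyone doing the same from the DataStax Java driver (3.x style, like the other snippets on this list), both levels go on the statement. This is only a rough sketch; the keyspace, table and values are made up, and `session` comes from a normal cluster.connect(...):

import com.datastax.driver.core.*;

// LWT statement: the serial CL governs the Paxos (prepare/propose) phase,
// the regular CL governs the commit phase.
Statement lwt = new SimpleStatement(
        "UPDATE ks.accounts SET balance = 90 WHERE id = 1 IF balance = 100");
lwt.setSerialConsistencyLevel(ConsistencyLevel.LOCAL_SERIAL);
lwt.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);   // ANY is also allowed for the commit
ResultSet rs = session.execute(lwt);
System.out.println("applied = " + rs.wasApplied());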
From: Laxmikant Upadhyay
Date: Tuesday, 12 March 2024 at 7:33 am
To: user@cassandra.apache.org
Cc: Weng
You need to set both in the case of LWT. Your regular non-serial consistency
level will only be applied during the commit phase of the LWT.
On Wed, 6 Mar, 2024, 03:30 Weng, Justin via user,
wrote:
> Hi Cassandra Community,
>
>
>
> I've been investigating Cassandra Paxos v2 (as implemented in ...) and the
> commit consistency level for LWT after upgrading Paxos.
> In
> cqlsh<https://docs.datastax.com/en/cql-oss/3.3/cql/cql_reference/cqlshSerialConsistency.html>,
> gocql<https://github.com/gocql/gocql/blob/master/session.go#L1247> and the Python
> driver<https://docs.datastax.com/en/developer/p
My experience debugging this kind of issue is to turn on tracing. The nice
thing in Cassandra is that you can turn on tracing on only one node and with a
small probability, e.g.
nodetool settraceprobability 0.05 --- run this on only 1 node.
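Tracing can also be enabled per statement from the client side (a different knob from the server-side probability above). A small sketch with the DataStax Java driver 3.x, query and table made up:

import com.datastax.driver.core.*;

Statement stmt = new SimpleStatement("SELECT * FROM ks.tbl WHERE id = ?", 42)
        .enableTracing();                       // ask the coordinator to trace this query
ResultSet rs = session.execute(stmt);

// Fetch the trace the coordinator recorded and print its events.
QueryTrace trace = rs.getExecutionInfo().getQueryTrace();
System.out.println("duration = " + trace.getDurationMicros() + " us");
for (QueryTrace.Event e : trace.getEvents()) {
    System.out.println(e.getDescription() + " (" + e.getSource() + ")");
}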
Hope it helps.
Regards,
James
On Thu, Jul 21, 2022 at 2:50 PM Tolbert
I'd bet the JIRA that Paul is pointing to is likely what's happening
here. I'd look for read repair errors in your system logs or in your
metrics (if you have easy access to them).
There are operations that can happen during the course of a query
being executed at different CLs; see if that ticket applies to your
experience.
Thanks
Paul Chandler
On 21 Jul 2022, at 15:12, pwozniak wrote:

Yes, I did it. Nothing like this in my code. Consistency level is set only in
one place (shown below).

On 7/21/22 4:08 PM, manish khandelwal wrote:
Consistency can also be set on a statement basis. So please check in
your code that you might be setting consistency 'ALL' for some queries.
It doesn't make any sense to see consistency level ALL if the code is
not explicitly using it. My best guess is somewhere in the code the
consistency level was overridden.
On 21/07/2022 14:52, pwozniak wrote:
Hi,
we have the following code (java driver):
cluster = Cluster.builder() ...
Consistency can also be set on a statement basis. So please check in your
code that you might be setting consistency 'ALL' for some queries.
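A concrete (made-up) example of what such an override looks like with the DataStax Java driver 3.x; a statement-level setting silently wins over the QUORUM default configured on the Cluster:

import com.datastax.driver.core.*;

Statement read = new SimpleStatement("SELECT * FROM ks.tbl WHERE id = ?", 1);
read.setConsistencyLevel(ConsistencyLevel.ALL);   // per-statement override: this query runs at ALL
session.execute(read);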
On Thu, Jul 21, 2022 at 7:23 PM pwozniak wrote:
> Hi,
>
> we have the following code (java driver):
>
> cluster = Cluster.builder()
>     ...
>     .withQueryOptions(new QueryOptions()
>         .setConsistencyLevel(ConsistencyLevel.QUORUM))
>     .withTimestampGenerator(new AtomicMonotonicTimestampGenerator())
>     .withCredentials(userName, password).build();
>
> session = cluster.connect(keyspaceName);
where ConsistencyLevel.QUORUM is our default consistency level. But we
keep receiving the following exceptions
Not in 3.11, though 4.0 adds preview repair which can sorta do this if
you're also running incremental repair.
Just run nodetool repair, use subranges if needed. If you stream data,
they're out of sync. If you don't stream data, they're in sync.
On Thu, Jul 9, 2020 at 3:16 PM Jai Bheemsen Rao D
Hello,
I am trying to expand my C* cluster to a new region, followed by keyspace
expansion and nodetool rebuild -- sourceDC.
Once the rebuild process is complete, is there a way to identify if all the
data between two regions is in sync? Since the data size is large, I
cannot run select count(*).
This is how the getConsistencyLevel method is implemented. This method returns
the consistency level of the query, or null if no consistency level has been set
using setConsistencyLevel.
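In other words (driver 3.x, arbitrary query):

Statement s = new SimpleStatement("SELECT * FROM ks.tbl WHERE id = 1");
System.out.println(s.getConsistencyLevel());           // null - nothing set on the statement itself
s.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM);
System.out.println(s.getConsistencyLevel());           // LOCAL_QUORUM
// A null here just means the effective level will come from QueryOptions at execution time.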
Regards
Manish
On Fri, Jun 12, 2020 at 3:43 PM Manu Chadha wrote:
> Hi
>
> In my Cassandra Java driver c
Hi
In my Cassandra Java driver code, I am creating a query and then I print the
consistency level of the query
val whereClause = whereConditions(tablename, id);
cassandraRepositoryLogger.trace("getRowsByPartitionKeyId: looking in table
"+tablename+" wit
> I believe writes from 2 different clients at (essentially precisely) the same
> time on the same table & row have no knowledge of one another. Each unique
> LWT did what was asked of it, read the data and wrote as requested. Last
> write won. This is the definition of eventual
> consistency, and you found an edge case for LWT usage.
>
> There may be other suggestions, but I think the simplest method to get
> as close to a guarantee that your LWT functions as you wish would be to
> take the parallel access out of the equation. Create a canonical
> user ...
... update from
app1 (running on node1) succeeded - i.e. ResultSet#wasApplied returns true.
However, when app2 (on node2) reads the data, it gets stale data (from
before app1's update).
I'd like to know why this is happening, because LWT with serial consistency
should prevent this ty
Yes Jeff, I want to achieve the same (*You're trying to go local quorum in
one dc to local quorum in the other dc without losing any writes*).
Thanks for your quick response.
Regards
Manish
On Mon, Mar 16, 2020 at 10:58 AM Jeff Jirsa wrote:
>
> You’re trying to go local quorum in one dc to lo
You’re trying to go local quorum in one dc to local quorum in the other dc
without losing any writes?
The easiest way to do this strictly correctly is to take the latency hit and do
quorum while you run repair, then you can switch to local quorum on the other
side.
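In driver terms that's just two QueryOptions configurations (a sketch only, not a prescription; contact point is a placeholder): run with plain QUORUM while the repair is in flight, then redeploy with LOCAL_QUORUM once it has finished:

boolean migrationWindow = true;   // flip to false after repair completes

QueryOptions options = new QueryOptions().setConsistencyLevel(
        migrationWindow ? ConsistencyLevel.QUORUM         // cross-DC quorum: pays the latency hit
                        : ConsistencyLevel.LOCAL_QUORUM);  // steady state after the switch

Cluster cluster = Cluster.builder()
        .addContactPoint("10.0.0.1")                       // placeholder
        .withQueryOptions(options)
        .build();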
A few more notes inline.
While switching over datacenters, there is a chance of mutation drops,
because of which inconsistency may occur.
To avoid inconsistency we can do the following:
monitor and, if required, then run repair.
1. Monitor tpstats on all nodes. If the dropped message count is 0, it
can be inferred that no mutation
On Thu, Jan 16, 2020 at 3:18 PM Laxmikant Upadhyay
wrote:
>
> You are right, that will solve the problem. but unfortunately i won't be
> able to meet my sla with write each quorum . I am using local quorum for
> both read and write.
> Any other way ?
>
Is you read SLO more sensitive than write S
Hello Laxmiant,
your application has to deal with eventual consistency if you are using
Cassandra. Ensure you have
R + W > RF
and have repairs running periodically. This is the best way to be the
most consistent and coherent.
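(For example, with RF = 3: writing at QUORUM (2 replicas) and reading at QUORUM (2 replicas) gives 2 + 2 > 3, so every read overlaps at least one replica that acknowledged the write. With W = ONE and R = ONE, 1 + 1 = 2 is not greater than 3, so a read can be served entirely by a replica that has not yet received the write.)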
Jean Carlo
"The best way to predict the future is to i
>> ... active DC (let's say using LOCAL_QUORUM).
>>
>> You have "to switch" your clients without any issues since your writes
>> are replicated on all DCs.
>> --> That is not true, because there is a chance of mutation drops. (Hints and
>> read repair may help to some extent, but data consistency is not guaranteed
>> unless you run anti-entropy repair.)
On Thu, Jan 16, 2020, 3:45 PM Ahmed Eljami wrote:
> Hello,
>
> What do you m
> I am thinking of the below approaches:
>
> 1. Before switching, run the repair (although it mostly assures consistency,
> the repair itself may take a long time to complete).
>
> 2. Monitor the dropped message bean: if no message has been dropped since the
> last successful repair, then it is good to switch without running repair.
>
> 3. Monitor the hints backlog (files in the hints directory): if there is no
> backlog, then it is good to
On Sat, Jun 29, 2019 at 6:19 AM Nimbus Lin wrote:
>
> On the 2nd question, would you like to tell me how to change a
> write's and a read's consistency level separately in cqlsh?
>
Not that I know of special syntax for that, but you may add an explicit
"CONSISTENCY ..."
... in JConsole later.
On the 2nd question, would you like to tell me how to change a write's
and a read's consistency level separately in cqlsh?
Otherwise, how does the documentation's R + W > RF rule guarantee a strongly
consistent write and read?
Thank you!
Si
Dear Hiro,
Thank you for your response!
Hmm, my understanding is slightly different I think. Please let me try to
explain one situation and let me know what you think.
1. Do a LWT write with serial_consistency = SERIAL (default) and
consistency = ONE.
2. LWT starts its Paxos phase and has communicated with a quorum of nodes.
3. At this point a read
Hi Craig,
I'm not 100 % sure about some corner cases,
but I'm sure that LWT should be used with the following consistency
levels usually.
LWT write:
serial_consistency_level: SERIAL
consistency_level: QUORUM
LWT read:
consistency_level: SERIAL
(It's a bit weird and mis-leading a
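In DataStax Java driver 3.x terms (a sketch with a made-up table), that pairing looks roughly like this; setting the *normal* consistency level of the read to SERIAL makes the read go through a Paxos round, so it observes any LWT that has already been accepted:

// LWT write: Paxos phase at SERIAL, commit phase at QUORUM.
Statement write = new SimpleStatement(
        "INSERT INTO ks.users (id, name) VALUES (1, 'a') IF NOT EXISTS")
        .setSerialConsistencyLevel(ConsistencyLevel.SERIAL)
        .setConsistencyLevel(ConsistencyLevel.QUORUM);
boolean applied = session.execute(write).wasApplied();

// Linearizable read of the same partition: normal CL = SERIAL.
Statement read = new SimpleStatement("SELECT name FROM ks.users WHERE id = 1")
        .setConsistencyLevel(ConsistencyLevel.SERIAL);
Row row = session.execute(read).one();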
Hello!
I am trying to understand the consistency level (not serial consistency)
required for LWTs. Basically, what I am trying to understand is whether a
consistency level of ONE is enough for an LWT write operation if I do my
read with a consistency level of SERIAL.
It would seem so based on what
Short answer is no, because missing consistency isn’t an error and there’s no
way to know you’ve missed data without reading at ALL, and if it were ok to
read at ALL you’d already be doing it (it’s not ok for most apps).
> On May 7, 2019, at 8:05 AM, Fd Habash wrote:
>
> Typicall
Typically, when a read is submitted to C*, it may complete with …
1. No errors & returns expected data
2. Errors out with UnavailableException
3. No error & returns zero rows on the first attempt, but rows are returned on
subsequent runs.
The third scenario happens as a result of cluster entropy, especially d
From: Fd Habash
Reply-To: "user@cassandra.apache.org"
Date: Wednesday, 1 May 2019 at 06:18
To: "user@cassandra.apache.org"
Subject: Bootstrapping to Replace a Dead Node vs. Adding a New Node:
Consistency Guarantees
Reviewing the documentation & based on my testing, using C* 2.2.8, I was not
able to extend the cluster by adding multiple nodes simultaneously. I got an
error message …
Other bootstrapping/leaving/moving nodes detected, cannot bootstrap while
cassandra.consistent.rangemovement is true
I unde
... https://issues.apache.org/jira/browse/CASSANDRA-9620, which says '*Writing
the batch log will always be done using CL ONE.*' That contradicts what I
understood from DataStax's docs.

Yes, I understood batches are not for speed. Still, we are using them for a
consistency need.

@Mahesh Yes, we do set the consistency l
Subject: [EXTERNAL] Re: Getting Consistency level TWO when it is requested
LOCAL_ONE

Hi Jean,
I want to understand how you are setting the write consistency level as
LOCAL_ONE. Is it that with every query you mention the consistency level, or
have you set the Spring Cassandra config with the provided consistency level?
Like this:
cluster.setQueryOptions(new
QueryOptions
Hello everyone,
I have a case where the developers are using the Spring Data framework for
Cassandra. We are writing batches with the consistency level set to LOCAL_ONE,
but we got a timeout like this:
*Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException:
Cassandra timeout during BATCH_LOG
... cqlsh -e "DESCRIBE KEYSPACE system_auth;"
Or to check them all: cqlsh -e "DESCRIBE KEYSPACES;"
I don't know what's wrong exactly, but your application is truncating with
a consistency level of 'ALL', meaning all the replicas must be up for your
application to work.
On Wed, 19 Sept 2
*To:* user@cassandra.apache.org
*Subject:* Error during truncate: Cannot achieve consistency level ALL ,
how to fix it
Hi All,
I am new to Cassandra. Following below link
https://grokonez.com/spring-framework/spring-data/start-spring-data-cassandra-springboot#III_Sourcec
What RF is your system_auth keyspace?
If its one, match it to the user keyspace, and restart the node.
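If it does turn out to be 1, the usual fix looks roughly like the following (shown as CQL run through the Java driver; the DC name and target RF are placeholders, and this assumes Cassandra 3.0+ where schema lives in system_schema). Repair system_auth afterwards so the new replicas actually get the data:

// Check the current replication settings of system_auth.
Row ks = session.execute(
        "SELECT replication FROM system_schema.keyspaces "
      + "WHERE keyspace_name = 'system_auth'").one();
System.out.println(ks.getMap("replication", String.class, String.class));

// Raise it to match the application keyspace (placeholder DC name / RF).
session.execute("ALTER KEYSPACE system_auth WITH replication = "
      + "{'class': 'NetworkTopologyStrategy', 'DC1': '3'}");

// Then run `nodetool repair system_auth` on every node.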
From: sha p [mailto:shatestt...@gmail.com]
Sent: 19 September 2018 11:49
To: user@cassandra.apache.org
Subject: Error during truncate: Cannot achieve consistency level ALL , how to
fix it
Hi
>>> ... with RF = 2, but when I run
>>> this application from the above source code, the below error is thrown:
>>> Caused by: com.datastax.driver.core.exceptions.TruncateException: Error
>>> during truncate: Cannot achieve consistency level ALL """
>>>
>>>
>>> What am I doing wrong here? How do I fix it? Please help me.
>>>
>>> Regards,
>>> Shyam
>>>
>>
... Cassandra, I think the optimal approach
for my use-case would be to replicate the data on *ALL* nodes possible,
but require reads to only have a consistency level of one. So, in the case
that a node goes down, we can still read/write to other nodes. It is not
very important that a read be unanimously agreed upon, as long as Cassandra
is eventually consistent, within
... (I'm short of resources)
and WCL=ONE and RCL=ONE in a cluster of 10 nodes in an insert-only scenario.
The problem: I don't want to use nodetool repair because it would put huge
load on my cluster for a long time, but I also need data consistency
and fault tolerance in a way that, if one of my nodes fails:
1. there would be no single record data loss
2. write/read of data would be continued with no problem
I know that the current config won't satisfy No. 1, so I changed the Write
Consistency Level to ALL, and to satisfy No.

> 1. there would be no single record data loss
This requires write > 1.
> 2. write/read of data would be
This
Hi Ben
Thanks a lot. From my analysis of the code it looks like you are right.
When global read repair kicks in all live endpoints are queried for data,
regardless of consistency level. Only EACH_QUORUM is treated differently.
Cheers
Grzegorz
2018-04-22 1:45 GMT+02:00 Ben Slater :
> I have
... repairs via Reaper or your own method, it will resolve your
discrepancies.

On Apr 21, 2018, 3:16 AM -0400, Grzegorz Pietrusza wrote:

Hi all
I'm a bit confused with how read repair works in my case, which is:
- multiple DCs with RF 1 (NetworkTopologyStrategy)
- reads with consistency ONE

The article #1 says that read repair in fact runs RF reads for some percent
of the requests. Let's say I have read_repair_chance = 0.1. Does it mean that
... https://issues.apache.org/jira/browse/CASSANDRA-11569
>
> Cheers,
>
> Christophe
>
> On 26 March 2018 at 10:01, Jeronimo de A. Barros <
> jeronimo.bar...@gmail.com> wrote:
>
>> I'd like to know if there is a reasonable method to measure how long it
>> takes to have the data available across all replica nodes in a multi-DC
>> environment using LOCAL_ONE or LOCAL_QUORUM consistency levels.
>>
>> If there is already a study about this topic somewhere and someone could
>> point me
Hello,
What is the consistency level used when performing a COPY command via the CQL
interface? I don't see anything in the documentation:
https://docs.datastax.com/en/cql/3.1/cql/cql_reference/copy_r.html
I am setting the CONSISTENCY LEVEL at the CQL level and then running a COPY
command; does that
>>> On Tue, Feb 27, 2018 at 9:45 AM, Abhishek Kumar Maheshwari <
>>> abhishek.maheshw...@timesinternet.in> wrote:
>>>
>>>> I have a keyspace in Cassandra (Cassandra version 3.0.9, 12 servers in
>>>> total) with the below definition:
>>>>
>>>> {'DC1': '2', 'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy'}
>>>>
>>>> Sometimes I am getting the below exception:
>>>>
>>>> [snip]
>>>> Caused by: com.datastax.driver.core.exceptions.WriteTimeoutException:
>>>> Cassandra timeout during write query at consistency QUORUM (3 replica were
>>>> required but only 2 acknowledged the write)
>>>> at
>>>> com.datastax.driver.core.exceptions.WriteTimeoutException.copy(WriteTimeoutException.java:73)
>>>> at
>>>> com.datastax.driver.co
No. Rebuilds don't keep consistency, as they aren't smart enough to stream
from a specific replica; thus all replicas for a rebuild can stream from a
single replica. You need to repair after rebuilding.
If you're using NTS with #racks >= RF you can stream consistently. If this p
Hi there,
We know that we need to run repair regularly to keep data consistent. Suppose
we have DC1 & DC2:
if we add a new DC3 and rebuild from DC1, can we assume that DC3 is consistent
with DC1, at least at the time when DC3 is rebuilt successfully?
Thanks,
Peng Xiao,
... {'class':
'NetworkTopologyStrategy', 'datacenter1': 2};
cqlsh> select * from test_rf.t1;

 id | data
----+------

(0 rows)

And in my test this happens on all nodes at the same time. The explanation is
fairly simple: now a different node is responsible for the data that was
written to only one other node previously.
A repair in this tiny
Thanks Jeff. How do I determine that bootstrap is finished? Haven't seen
that anywhere so far.
Reads via storage would be ok as every query would be checked by another
node too. I was only seeing inconsistencies since clients went directly to
the node with Consistency ONE.
Greetings
Jeff Jirsa wrote on Wed, 2 Aug 2017 at 16:01:
>> By the time bootstrap is complete ... consistent by the time
>> bootstrap finishes
--
Jeff Jirsa
> On Aug 2, 2017, at 1:53 AM, Daniel Hölbling-Inzko
> wrote:
>
> Hi,
> It's probably a strange question but I have a heavily read-optimized payload
> where data integrity is not a big deal. So to keep latencies low I am
only in this one case might that work (RF==N)
On Wed, Aug 2, 2017 at 10:53 AM, Daniel Hölbling-Inzko <
daniel.hoelbling-in...@bitmovin.com> wrote:
>
> Any advice on how to avoid this in the future? Is there a way to start up
> a node that does not serve client requests but does replicate data?
>
Would it not work if you first increase the RF
Can't you just add a new DC and then tell the clients to connect to the
new one (after migrating all the data to it, obviously)? If you can't achieve
that, you should probably use GossipingPropertyFileSnitch. Your best plan
is to have the desired RF/redundancy from the start. Changing RF in
production
Thanks for the pointers Kurt!
I did increase the RF to N so that would not have been the issue.
DC migration is also a problem since I am using the Google Cloud Snitch. So
I'd have to take down the whole DC and restart anew (which would mess with
my clients as they only connect to their local DC).
If you want to change RF on a live system your best bet is through DC
migration (add another DC with the desired # of nodes and RF), and migrate
your clients to use that DC. There is a way to boot a node and not join the
ring, however I don't think it will work for new nodes (have not
confirmed), a
Hi,
It's probably a strange question but I have a heavily read-optimized
payload where data integrity is not a big deal. So to keep latencies low I
am reading with Consistency ONE from my Multi-DC Cluster.
Now the issue I saw is that I needed to add another Cassandra node (for
redundancy re
It's best-practice to disable the default user ("cassandra" user) after
enabling password authentication on your cluster. The default user reads
with a CL.QUORUM when authenticating, while other users use CL.LOCAL_ONE.
This means it's more likely you could experience authentication issues,
even if
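The usual follow-up, sketched here as CQL run through the driver (the role name and password are placeholders, and this assumes the role-based auth of 2.2+): create your own superuser, reconnect as it, then strip the default account.

// 1. While logged in as the default 'cassandra' user, create a replacement superuser.
session.execute("CREATE ROLE admin WITH PASSWORD = 'change-me' "
      + "AND SUPERUSER = true AND LOGIN = true");

// 2. Reconnect as 'admin', then neutralise the default account.
session.execute("ALTER ROLE cassandra WITH SUPERUSER = false AND LOGIN = false");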
On Thu, Jul 6, 2017 at 6:58 PM, Charulata Sharma (charshar) <
chars...@cisco.com> wrote:
> Hi,
>
> I am facing similar issues with SYSTEM_AUTH keyspace and wanted to know
> the implication of disabling the "*cassandra*" superuser.
>
Unless you have scheduled any tasks that require the user with t
... 4, 2017 at 2:16 AM
To: Oleksandr Shulgin <oleksandr.shul...@zalando.de>
Cc: "user@cassandra.apache.org"
Subject: Re: Cannot achieve consistency level LOCAL_ONE
Thanks for the detail explanation
On 2017-06-15 19:10 (-0700), srinivasarao daruna
wrote:
> Hi,
>
> Recently one of our Spark jobs missed the Cassandra consistency property and
> the number of concurrent writes property.
Just for the record, you still have a consistency level set, it's just set to
whatever
Hi,
Recently one of our Spark jobs missed the Cassandra consistency property and
the number of concurrent writes property.
Due to that, some mutations failed, as we saw when we checked tpstats. Also, we
observed read timeouts occurring not only on the table that the job
inserts into, but also on other
Thanks for the detailed explanation. You did solve my problem.
Cheers,
-Simon
From: Oleksandr Shulgin
Date: 2017-06-14 17:09
To: wxn...@zjqunshuo.com
CC: user
Subject: Re: Cannot achieve consistency level LOCAL_ONE
On Wed, Jun 14, 2017 at 10:46 AM, wxn...@zjqunshuo.com
wrote:
Thanks for the
On Wed, Jun 14, 2017 at 10:46 AM, wxn...@zjqunshuo.com wrote:
> Thanks for the reply.
> My system_auth settings is as below and what should I do with it? And I'm
> interested why the newly added node is responsible for the user
> authentication?
>
> CREATE KEYSPACE system_auth WITH replication = {'class': 'SimpleStrategy',
> 'replication_factor': '1'} AND durable_writes = true;
-Simon
From: Oleksandr Shulgin
Date: 2017-06-14 16:36
To: wxn...@zjqunshuo.com
CC: user
Subject: Re: Cannot achieve consistency level LOCAL_ONE
On Wed, Jun 14, 2017 at 9:11 AM, wxn...@zjqunshuo.com
wrote:
Hi,
Cluster set up:
1 DC with 5
During the down
> period, all 4 other nodes report "Cannot achieve consistency
> level LOCAL_ONE" constantly until I brought up the dead node. My data
> seems lost during that down time. To me this should not happen, because the
> write CL is LOCAL_ONE and only one node was dea