Hi,
Can we know whether a value will be applied eventually after a WriteTimeoutException?
I'm investigating the behavior of Cassandra's counter writes.
In a CAS write, we can know the value will be applied eventually when the
WriteType of the response is SIMPLE, because that means the write failed in the commit phase.
How about the counter write?
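For the CAS case the Java driver exposes this directly; here is a minimal sketch, assuming the DataStax Java driver 3.x and a placeholder table testdb.testtbl (key int PRIMARY KEY, val int). A counter write timeout is reported with WriteType.COUNTER instead.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.WriteType;
import com.datastax.driver.core.exceptions.WriteTimeoutException;

public class CasTimeoutCheck {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            try {
                // Conditional (CAS) write; keyspace/table are placeholders.
                session.execute("INSERT INTO testdb.testtbl (key, val) VALUES (1, 100) IF NOT EXISTS");
            } catch (WriteTimeoutException e) {
                if (e.getWriteType() == WriteType.SIMPLE) {
                    // Timed out in the commit phase: the Paxos round was accepted,
                    // so the value will be applied eventually.
                    System.out.println("accepted, not yet committed");
                } else if (e.getWriteType() == WriteType.CAS) {
                    // Timed out in the Paxos (prepare/propose) phase: outcome unknown.
                    System.out.println("Paxos phase timed out, outcome unknown");
                }
            }
        }
    }
}
```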
you’re blocked on internode and not commitlog? Batch is
> typically not what people expect (group commitlog in 4.0 is probably closer
> to what you think batch does).
>
> --
> Jeff Jirsa
>
>
> On Nov 27, 2018, at 10:55 PM, Yuji Ito wrote:
>
> Hi,
>
> Thank you
written using Netty in 4.0. It will be better to
> test it using that as potential changes will mostly land on top of that.
>
> On Mon, Nov 26, 2018 at 7:39 AM Yuji Ito wrote:
>
>> Hi,
>>
>> I'm investigating LWT performance with C* 3.11.3.
>> It looks like the
Our tests are based on riptano's great work.
https://github.com/riptano/jepsen/tree/cassandra/cassandra
I refined it for the latest Jepsen and removed some tests.
Next, I'll fix clock-drift tests.
I would like to get your feedback.
Thanks,
Yuji Ito
browse/CASSANDRA-9766
>
> To pick up those fixes, you'd want to benchmark 3.11.1 instead of 3.0.15
>
>
> On Tue, Oct 17, 2017 at 8:04 PM, Yuji Ito wrote:
>
>> Hi all,
>>
>> I'm comparing performance between Cassandra 2.2.10 and 3.0.15.
>> SELE
Hi all,
I'm comparing performance between Cassandra 2.2.10 and 3.0.15.
SELECT on 3.0.15 is faster than on 2.2.10.
However, conditional INSERT and UPDATE on 3.0.15 are slower than on 2.2.10.
Is this expected? If so, I want to know why.
I'm trying to measure performance for non-conditional operations next.
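For reference, a minimal sketch of the kind of conditional (LWT) statements I am comparing, assuming the DataStax Java driver 3.x; testdb.testtbl and its columns are placeholders rather than the real benchmark schema:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;

public class LwtSketch {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            // Conditional INSERT: applied only if the row does not exist yet.
            ResultSet ins = session.execute(
                    "INSERT INTO testdb.testtbl (key, val) VALUES (1, 0) IF NOT EXISTS");
            System.out.println("insert applied: " + ins.wasApplied());

            // Conditional UPDATE: applied only if the current value matches.
            ResultSet upd = session.execute(
                    "UPDATE testdb.testtbl SET val = 1 WHERE key = 1 IF val = 0");
            System.out.println("update applied: " + upd.wasApplied());
        }
    }
}
```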
En
built from yum?
> On Wed, May 10, 2017 at 9:54 PM Yuji Ito wrote:
>
>> Hi Joaquin,
>>
>> > Were both tests run from the same machine at close to the same time?
>> Yes. I ran both tests within 30 min.
>> I retried them today. The result was the same as yesterday.
ts.
>
> Also, ensure that you're using the same test instance (that is not running
> Cassandra) to connect to both Cassandra clusters.
>
> Cheers,
>
> Joaquin
>
> Joaquin Casares
> Consultant
> Austin, TX
>
> Apache Cassandra Consulting
> http://www.thel
Hi all,
I'm trying a simple performance test.
The test requests select operations (CL.SERIAL or CL.QUORUM) by increasing
the number of threads.
There is a performance difference between C* installed via yum and
C* that I built myself.
What causes the difference?
I use C* 2.2.8.
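A minimal sketch of that kind of read load, assuming the DataStax Java driver 3.x and a placeholder table testdb.testtbl (this is illustrative, not the actual test code):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SelectLoadSketch {
    public static void main(String[] args) throws InterruptedException {
        // Number of client threads, increased between runs.
        int threads = Integer.parseInt(args.length > 0 ? args[0] : "8");
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            for (int i = 0; i < threads; i++) {
                pool.submit(() -> {
                    for (int n = 0; n < 1000; n++) {
                        Statement stmt = new SimpleStatement(
                                "SELECT val FROM testdb.testtbl WHERE key = 1");
                        // Switch to ConsistencyLevel.QUORUM for the non-serial runs.
                        stmt.setConsistencyLevel(ConsistencyLevel.SERIAL);
                        session.execute(stmt);
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.MINUTES);
        }
    }
}
```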
One of
/session
objects per instance.
But it is inefficient and uncommon.
So, we aren't sure that the application works when a lot of cluster/session
objects are created.
Is it correct?
Thank you,
Yuji
On Wed, Feb 8, 2017 at 12:01 PM, Ben Bromhead wrote:
> On Tue, 7 Feb 2017 at 17:52 Yuji Ito w
ecause credentials should be
>> usually set with a session but actually they are set with a cluster.
>>
>> So, if there are 1000 clients, then with this API it has to create
>> 1000 cluster instances ?
>> 1000 clients seems usual if there are many nodes (say 20) and ea
Hi all,
I want to know how to authenticate Cassandra users for multiple instances
with the Java driver.
For instance, each thread creates an instance to access Cassandra with
authentication.
As an implementation example, only the first constructor builds a cluster
and a session.
Other constructors use them.
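A minimal sketch of that pattern, assuming the DataStax Java driver 3.x and PasswordAuthenticator on the server side; class and field names are placeholders, not the actual implementation:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

// The first caller builds the Cluster and Session with credentials;
// every later instance reuses them.
public class CassandraClient {
    private static Cluster cluster;
    private static Session session;

    private final Session shared;

    public CassandraClient(String host, String user, String password) {
        synchronized (CassandraClient.class) {
            if (cluster == null) {
                // Credentials are set on the Cluster, not on the Session.
                cluster = Cluster.builder()
                        .addContactPoint(host)
                        .withCredentials(user, password)
                        .build();
                session = cluster.connect();
            }
        }
        this.shared = session;
    }

    public Session session() {
        return shared;
    }
}
```

Since Session is thread-safe, all threads can share the single instance instead of each client creating its own cluster.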
Hi Shalom,
I also got a WriteTimeoutException in my destructive test, like in your test.
When did you drop a node?
A coordinator node sends a write request to all replicas.
When one of the nodes goes down while the request is being executed, a
WriteTimeoutException sometimes happens.
cf. http://www.datastax.com/de
Hi all,
I got an OutOfMemoryError during the startup process, as below.
I have 3 questions about the error.
1. Why did the Cassandra I built myself cause OutOfMemory errors?
OutOfMemory errors happened during the startup process on some (not all) nodes
running Cassandra 2.2.8, which I got from GitHub and built myself.
Ho
he.org/doc/latest/tools/nodetool/
>> truncatehints.html?highlight=truncate)
>> > . However, it seems to clear all hints on particular endpoint, not just
>> for
>> > a specific table.
>> >
>> > Cheers
>> > Ben
>> >
>> > On Fri,
3600 -e "consistency serial; select
count(*) from testdb.testtbl;"
echo "restart C* process on node2"
pdsh -l $node2_user -R ssh -w $node2_ip "sudo /etc/init.d/cassandra start"
Thanks,
yuji
On Fri, Nov 18, 2016 at 7:52 PM, Yuji Ito wrote:
> I investig
o fix given there is a clear operational work-around.
>
> Cheers
> Ben
>
> On Thu, 24 Nov 2016 at 15:14 Yuji Ito wrote:
>
>> Hi Ben,
>>
>> I continue to investigate the data loss issue.
>> I'm investigating logs and source code and trying to reproduce the data loss.
> Cheers
> Ben
>
> On Fri, 11 Nov 2016 at 13:07 Yuji Ito wrote:
>
>> Thanks Ben,
>>
>> I tried 2.2.8 and could reproduce the problem.
>> So, I'm investigating some bug fixes of repair and commitlog between
>> 2.2.8 and 3.0.9.
>>
>> - C
65028 (2016-11-17 08:36:05,028)
thanks,
yuji
On Wed, Nov 16, 2016 at 5:25 PM, Yuji Ito wrote:
> Hi,
>
> I could find stale data after truncating a table.
> It seems that truncating starts while recovery is being executed just
> after a node restarts.
> After the truncating fi
Hi,
I could find stale data after truncating a table.
It seems that truncation starts while recovery is still being executed just after
a node restarts.
After the truncation finishes, does recovery still continue?
Is that expected?
I use C* 2.2.8 and can reproduce it as below.
[create table]
cqlsh
;
> Good news that 3.0.9 fixes the problem so up to you if you want to
> investigate further and see if you can narrow it down to file a JIRA
> (although the first step of that would be trying 2.2.9 to make sure it’s
> not already fixed there).
>
> Cheers
> Ben
>
> On Wed, 9
I tried C* 3.0.9 instead of 2.2.
The data loss problem hasn't happened so far (without `nodetool flush`).
Thanks
On Fri, Nov 4, 2016 at 3:50 PM, Yuji Ito wrote:
> Thanks Ben,
>
> When I added `nodetool flush` on all nodes after step 2, the problem
> didn't happen.
> D
make sure all
> data is written to sstables rather than relying on commit logs
> 2) Run the test with consistency level quorum rather than serial
> (shouldn’t be any different but quorum is more widely used so maybe there
> is a bug that’s specific to serial)
>
> Cheers
> Ben
y
> PK)? Cassandra basically treats any inserts with the same primary key as
> updates (so 1000 insert operations may not necessarily result in 1000 rows
> in the DB).
>
> On Fri, 21 Oct 2016 at 16:30 Yuji Ito wrote:
>
>> thanks Ben,
>>
>> > 1) At what stage di
is used by the test
> keyspace? What consistency level is used by your operations?
>
>
> Cheers
> Ben
>
> On Fri, 21 Oct 2016 at 13:57 Yuji Ito wrote:
>
>> Thanks Ben,
>>
>> I tried to run a rebuild and repair after the failed node rejoined the
>> c
on
> of running a rebuild or repair still applies.
>
> On Tue, 18 Oct 2016 at 15:45 Yuji Ito wrote:
>
>> Thanks Ben, Jeff
>>
>> Sorry that my explanation confused you.
>>
>> Only node1 is the seed node.
>> Node2 whose C* data is deleted is NOT a seed.
ht way to recover in this scenario is to run a nodetool
> rebuild on Node A after the other two nodes are running. You could
> theoretically also run a repair (which would be good practice after a weird
> failure scenario like this) but rebuild will probably be quicker given you
> know al
ild machines to prevent accidental
>> replacement. You need to tell it to build the “new” machines as a
>> replacement for the “old” machine with that IP by setting
>> -Dcassandra.replace_address_first_boot=. See
>> http://cassandra.apache.org/doc/latest/operating/topo_changes.htm
Hi all,
A failed node can rejoin a cluster.
On that node, all data in /var/lib/cassandra was deleted.
Is this normal?
I can reproduce it as below.
cluster:
- C* 2.2.7
- a cluster has node1, 2, 3
- node1 is a seed
- replication_factor: 3
how to:
1) stop C* process and delete all data in /var/lib/
wrote:
> Nop, still don't get stale values. (I just ran your script 3 times)
>
> On Thu, Aug 25, 2016 at 12:36 PM, Yuji Ito wrote:
>
>> Thank you for testing, Christian
>>
>> What did you set commitlog_sync to in cassandra.yaml?
>> I set commitlog_sync to batch (win
5. check the table
>>>
>>> key | val
>>> -----+------
>>>    5 | 1000
>>>   10 | 1000
>>>    1 | 1000
>>>    8 | 1000
>>>    2 | 1000
>>>    4 | 1000
>>>    7 | 1000
>>>    6 | 1000
>>>
are seeing a different issue than I do.
>
> Have you tried with durable_writes=False? If the issue is caused by the
> commitlog, then it should work if you disable durable_writes.
>
> Cheers,
> Christian
>
>
>
> On Tue, Aug 9, 2016 at 3:04 PM, Yuji Ito wrote:
>
ndomly fail out of my 1800. I can see that the
> failed tests sometimes show data from other tests, which I think must be
> because of a failed truncate. This behaviour seems to be CQL only, or at
> least has gotten worse with CQL. I did not experience this with Thrift.
>
> regard
Hi all,
I have a question about clearing table and commit log replay.
After some tables were truncated consecutively, I got some stale values.
This problem doesn't occur when I clear keyspaces with DROP (and CREATE).
I'm running the following test with node failure.
Some stale values appear at ch
el.SERIAL
4. the read succeeds, but the result isn't the new value
This problem does not always occur.
Most reads can get the latest value.
Thanks
Yuji
On Mon, Jul 25, 2016 at 4:34 PM, DuyHai Doan wrote:
> Can you outline the detailed steps to reproduce the issue Yuji ?
>
> On Mon,
thanks, Ryan
> are you using one of the SERIAL Consistency Levels?
Yes, I use reads with ConsistencyLevel.SERIAL.
On Mon, Jul 25, 2016 at 10:10 AM, Ryan Svihla wrote:
> are you using one of the SERIAL Consistency Levels?
>
> --
> Ryan Svihla
>
> On July 24, 2016 at 8:
Hi,
I have another question about CAS operations.
Can a read get stale data after a failure in the commit phase?
According to the following article,
when a write fails in the commit phase (a WriteTimeout with WriteType SIMPLE
happens),
a subsequent read will repair the uncommitted state
and get the latest value.
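Below is a minimal sketch of what that looks like from the client side, assuming the DataStax Java driver 3.x and a placeholder table testdb.testtbl: after a WriteTimeout with WriteType SIMPLE, a read at ConsistencyLevel.SERIAL should complete the in-progress Paxos write, so the value it returns is the one that ends up committed.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ConsistencyLevel;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;
import com.datastax.driver.core.WriteType;
import com.datastax.driver.core.exceptions.WriteTimeoutException;

public class SerialReadAfterCasTimeout {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            try {
                session.execute("UPDATE testdb.testtbl SET val = 2 WHERE key = 1 IF val = 1");
            } catch (WriteTimeoutException e) {
                if (e.getWriteType() == WriteType.SIMPLE) {
                    // The Paxos proposal was accepted but the commit timed out.
                    // A read at CL.SERIAL completes any in-progress Paxos write,
                    // so the value observed here is the one that stays committed.
                    Statement read = new SimpleStatement(
                            "SELECT val FROM testdb.testtbl WHERE key = 1");
                    read.setConsistencyLevel(ConsistencyLevel.SERIAL);
                    Row row = session.execute(read).one();
                    System.out.println("val after serial read: "
                            + (row == null ? null : row.getInt("val")));
                }
            }
        }
    }
}
```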
nging or tons of compactions + 100% CPU?
>
Was the STCS test done against a table holding as much data as the LCS one?
In the STCS setting, the cqlsh hang didn't happen.
The size of the tables is almost the same as in the LCS setting.
Regards,
Yuji
On Wed, Jul 20, 2016 at 7:03 PM, Alain RODRIGUEZ wrote
ug.log was filled with "Choosing candidates for L0".
This problem hasn't occurred in the STCS setting.
Thanks,
Yuji Ito
),
reads will force Cassandra to commit the data.
http://www.datastax.com/dev/blog/cassandra-error-handling-done-right
Regards,
Yuji Ito
On Wed, Jun 29, 2016 at 7:03 AM, Tyler Hobbs wrote:
> Reads at CL.SERIAL will complete any in-progress paxos writes, so the
> behavior you're seeing
;1" in the row!!
5. the read/results phase fails with a ReadTimeoutException caused by the
failure of node C
Thanks,
Yuji Ito