>> Please feel free to raise a jira and contribute a patch to the documentation.
>>
>> On a side note, why are you implementing your own driver?
>>
>> Thanks,
>>
>> Dinesh
>>
>> On Sun, Aug 11, 2024 at 8:29 AM Vincent Rischmann
>> wrote:
Hello,
this may not be the best place to ask this, feel free to redirect me.
I'm working on writing a Cassandra client in my spare time and am currently
implementing the framing that has been added in protocol v5.
I followed the spec available here:
https://github.com/apache/cassandra/blob/tru
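For context, a minimal sketch in Go of the uncompressed v5 frame layout that spec describes: a 3-byte header carrying a 17-bit payload length plus a self-contained flag, a CRC24 of that header, the payload (at most 131071 bytes), and a CRC32 trailer over the payload. The helper names below are invented and the CRC routines are left as stubs; the exact polynomials, seeds and byte order are defined in the spec (and in Cassandra's Crc.java) and should be checked there.

package main

import (
	"encoding/binary"
	"fmt"
)

const maxPayloadLength = 1<<17 - 1 // 131071 bytes per uncompressed v5 frame

// encodeFrame packs a payload into the uncompressed v5 framing layout:
// 3 header bytes (17-bit payload length + self-contained flag), a CRC24 of
// that header, the payload, and a CRC32 trailer over the payload.
func encodeFrame(payload []byte, selfContained bool) ([]byte, error) {
	if len(payload) > maxPayloadLength {
		return nil, fmt.Errorf("payload too large: %d bytes", len(payload))
	}

	header := uint64(len(payload)) // bits 0-16: payload length
	if selfContained {
		header |= 1 << 17 // bit 17: self-contained flag
	}

	// The header integer is serialized little-endian here (as in Cassandra's
	// own frame codec); verify the byte order against the spec.
	var tmp [8]byte
	binary.LittleEndian.PutUint64(tmp[:], header)

	out := make([]byte, 0, 6+len(payload)+4)
	out = append(out, tmp[0], tmp[1], tmp[2])

	// CRC24 over the 3 header bytes.
	hcrc := crc24(out[:3])
	out = append(out, byte(hcrc), byte(hcrc>>8), byte(hcrc>>16))

	out = append(out, payload...)

	// CRC32 trailer over the payload.
	var trailer [4]byte
	binary.LittleEndian.PutUint32(trailer[:], crc32Payload(payload))
	return append(out, trailer[:]...), nil
}

// Placeholder CRC routines: the spec defines the exact polynomials and initial
// values, which a real client must implement.
func crc24(b []byte) uint32        { _ = b; return 0 }
func crc32Payload(b []byte) uint32 { _ = b; return 0 }

func main() {
	frame, err := encodeFrame([]byte("envelope bytes go here"), true)
	if err != nil {
		panic(err)
	}
	fmt.Printf("frame is %d bytes\n", len(frame))
}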
o is you see them in the logs? If that's the case, then
> yes, I would do `nodetool assassinate`.
>
>
>
> On Wed, Aug 28, 2019 at 7:33 AM Vincent Rischmann
> wrote:
>> Hi,
>>
>> while replacing a node in a cluster I saw this log:
>>
Hi,
while replacing a node in a cluster I saw this log:
2019-08-27 16:35:31,439 Gossiper.java:995 - InetAddress /10.15.53.27 is now
DOWN
it caught my attention because that IP address doesn't exist in the
cluster anymore, and hasn't for a long time.
After some reading I ran `nodetool gossi
Hello,
we recently added a new 5 node cluster used only for a single service,
and right now it's not even read from, we're just loading data into it.
Each node is identical: 32 GiB of RAM, a 4-core Xeon E5-1630, 2 SSDs in
RAID 0, Cassandra v3.11
We have two tables with roughly this schema:
CREATE T
its behavior over a full day. If the
> results are satisfying, generalize to the rest of the cluster. You
> need to experience peak load to make sure the new settings are
> fixing your issues.
> Cheers,
>
>
>
> On Tue, Jun 6, 2017 at 4:22 PM Vincent Rischmann
> wrot
e batches for writes (although your problem doesn't seem to
> be write related)?
> Can you share the queries from your scheduled selects
> and the data model?
> Cheers,
>
>
> On Tue, Jun 6, 2017 at 2:33 PM Vincent Rischmann
> wrote:
>> Hi,
Hi,
we have a cluster of 11 nodes running Cassandra 2.2.9 where we regularly
get READ messages dropped:
> READ messages were dropped in last 5000 ms: 974 for internal timeout
> and 0 for cross node timeout
Looking at the logs, some are logged at the same time as Old Gen GCs.
These GCs all take aro
Hi,
I'm using cassandra-reaper
(https://github.com/thelastpickle/cassandra-reaper) to manage repairs of
my Cassandra clusters, probably like a bunch of other people.
When I started using it (it was still the version from the Spotify
repository) the UI didn't work well, and the Python CLI clien
, Vladimir Yudovin wrote:
> Do you also store events in Cassandra? If yes, why not add a
> "processed" flag to the existing table(s), and fetch non-processed events
> with a single SELECT?
>
> Best regards, Vladimir Yudovin,
> *Winguzone[1] - Cloud Cassandra Hosti
Hello,
I'm using a table like this:
CREATE TABLE myset (id uuid PRIMARY KEY)
which is basically a set I use for deduplication: id is a unique ID for
an event. Before processing an event I check whether its id is already in
the table, and when I process it I insert the id.
It
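For reference, a minimal sketch of that deduplication pattern using the gocql driver; the contact point, keyspace name and id generation below are made-up placeholders, and the flow is the check-then-insert described above.

package main

import (
	"log"

	"github.com/gocql/gocql"
)

func main() {
	// Hypothetical contact point and keyspace; adjust for your cluster.
	cluster := gocql.NewCluster("127.0.0.1")
	cluster.Keyspace = "myks"
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	eventID := gocql.TimeUUID() // stand-in for the event's unique id

	// Check-then-insert: look the id up first, process only if it is absent,
	// then record it.
	var existing gocql.UUID
	err = session.Query(`SELECT id FROM myset WHERE id = ?`, eventID).Scan(&existing)
	switch {
	case err == gocql.ErrNotFound:
		// not seen before: process the event, then mark it as processed
		if err := session.Query(`INSERT INTO myset (id) VALUES (?)`, eventID).Exec(); err != nil {
			log.Fatal(err)
		}
	case err != nil:
		log.Fatal(err)
	default:
		// id already present: skip the event
	}
}

A lightweight transaction (INSERT ... IF NOT EXISTS, checked with ScanCAS) would fold the existence check and the insert into a single request, at the cost of a Paxos round per event.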
Ok, thanks Matija.
On Tue, Feb 21, 2017, at 11:43 AM, Matija Gobec wrote:
> They appear for each repair run and disappear when the repair run
> finishes.
>
> On Tue, Feb 21, 2017 at 11:14 AM, Vincent Rischmann
> wrote:
>> Hi,
>>
>> I upgraded t
Hi,
I upgraded to Cassandra 2.2.8 and noticed something weird in
nodetool tpstats:
Pool Name        Active  Pending   Completed  Blocked  All time blocked
MutationStage         0        0   116265693        0                 0
ReadStage             1
re when the JVM dies.
>
> I hope that's helpful as there is no easy answer here, and the problem
> should be narrowed down by fixing all potential causes.
>
> Cheers,
>
>
>
>
> On Mon, Nov 21, 2016 at 5:10 PM Vincent Rischmann
> wrote:
>>
n the logs and a high number of tombstone reads in
> cfstats
> * Make sure swap is disabled
>
> Cheers,
>
>
> On Mon, Nov 21, 2016 at 2:57 PM Vincent Rischmann
> wrote:
>> @Vladimir
>>
>> We tried with 12Gb and 16Gb, the problem appeare
0 (e.g. 60-70% of physical
>> memory).
>> Also, how many tables do you have across all keyspaces? Each table can
>> consume a minimum of 1 MB of Java heap.
>>
>> Best regards, Vladimir Yudovin,
>> *Winguzone[1] - Hosted Cloud Cassandra Launch your cluster in
>>
Hello,
we have an 8 node Cassandra 2.1.15 cluster at work which is giving us a
lot of trouble lately.
The problem is simple: nodes regularly die, either because of an out of
memory exception or because the Linux OOM killer decides to kill the process.
For a couple of weeks now we increased the heap to 20Gb hop
use of big partitions, so it's definitely good to know, and I'll work on
reducing partition sizes.
On Fri, Oct 28, 2016, at 06:32 PM, Edward Capriolo wrote:
>
>
> On Fri, Oct 28, 2016 at 11:21 AM, Vincent Rischmann
> wrote:
>> Doesn't paging h
Doesn't paging help with this? Also, if we select a range via the
clustering key we're never really selecting the full partition. Or is
that wrong?
On Fri, Oct 28, 2016, at 05:00 PM, Edward Capriolo wrote:
> Big partitions are an anti-pattern; here is why:
>
> First Cassandra is not an analytic data
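For reference, a minimal sketch of what driver-side paging plus a clustering-key range looks like with gocql; the table, columns and contact point below are invented for illustration.

package main

import (
	"log"
	"time"

	"github.com/gocql/gocql"
)

func main() {
	cluster := gocql.NewCluster("127.0.0.1") // assumed contact point
	cluster.Keyspace = "myks"                // assumed keyspace
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Hypothetical wide table: PRIMARY KEY (user_id, event_time).
	// Restricting on the clustering key reads only a slice of the partition,
	// and PageSize makes the driver fetch it 500 rows at a time.
	since := time.Now().Add(-24 * time.Hour)
	iter := session.Query(
		`SELECT event_time, payload FROM events WHERE user_id = ? AND event_time > ?`,
		42, since,
	).PageSize(500).Iter()

	var eventTime time.Time
	var payload string
	for iter.Scan(&eventTime, &payload) {
		// process one row at a time; the driver fetches the next page as needed
	}
	if err := iter.Close(); err != nil {
		log.Fatal(err)
	}
}

Paging limits how many rows the driver fetches and holds at a time, but it doesn't remove the server-side cost of very large partitions, which is the point of the reply above.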
ing incremental repair on a regular basis once you've
> started, as you'll have two separate pools of sstables (repaired and
> unrepaired) that won't get compacted together, which could be a
> problem if you want tombstones to be purged efficiently.
> Cheers,
>
> On Thu, 27
> One last thing: can you check if you have particularly big partitions
> in the CFs that fail to get repaired? You can run nodetool
> cfhistograms to check that.
>
> Cheers,
>
>
>
> On Thu, Oct 27, 2016 at 5:24 PM Vincent Rischmann
> wrote:
>> Thanks fo
ave you
> from a very bad first experience with incremental repair.
> Furthermore, make sure you run repair daily after your first inc
> repair run, in order to work on small sized repairs.
>
> Cheers,
>
>
> On Thu, Oct 27, 2016 at 4:27 PM Vincent Rischmann
>
Hi,
we have two Cassandra 2.1.15 clusters at work and are having some
trouble with repairs.
Each cluster has 9 nodes, and the amount of data is not gigantic, but
some column families have 300+ GB of data.
We tried to use `nodetool repair` for these tables but at the time we
tested it, it made the w