Re: Bloom filter false positives high

2019-05-16 Thread Martin Mačura
ne > > node is somehow behind) so if you don't run repairs maybe it is just > > somehow unsynchronized but that is really just my guess. > > > > On Wed, 17 Apr 2019 at 21:39, Martin Mačura wrote: > > > > > > We cannot run any repairs on these tables. When

Re: Bloom filter false positives high

2019-04-17 Thread Martin Mačura
am not sure why it is called to be > honest. > > Is all fine with db as such? Do you run repairs? Does that number > increase or decrease over time? Has repair or compaction some effect > on it? > > On Wed, 17 Apr 2019 at 20:48, Martin Mačura wrote: > > > >

Re: Bloom filter false positives high

2019-04-17 Thread Martin Mačura
tive_retry = 'NONE'; On Wed, Apr 17, 2019 at 12:25 PM Stefan Miklosovic < stefan.mikloso...@instaclustr.com> wrote: > What is your bloom_filter_fp_chance for either table? I guess it is > bigger for the first one; the bigger that number is (between 0 and 1), the less > memory it will
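For context, bloom_filter_fp_chance is the per-table knob under discussion: a higher value (between 0 and 1) yields a smaller bloom filter in memory but more false positives. A minimal CQL sketch with hypothetical keyspace/table names; existing SSTables must be rewritten before the new filter takes effect:

    -- higher fp chance => smaller bloom filter, more false positives
    ALTER TABLE ks.events WITH bloom_filter_fp_chance = 0.1;
    -- then rewrite existing SSTables so their filters are rebuilt:
    --   nodetool upgradesstables -a ks events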

Bloom filter false positives high

2019-04-17 Thread Martin Mačura
Hi, I have a table with poor bloom filter false ratio:

    SSTable count: 1223
    Space used (live): 726.58 GiB
    Number of partitions (estimate): 8592749
    Bloom filter false positives: 35796352
    Bloom filter false ratio: 0.68472
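These figures correspond to the per-table section of nodetool tablestats; a false ratio of 0.68472 means roughly two out of three positive bloom filter probes hit SSTables that did not actually contain the partition. A quick way to pull just these lines (keyspace/table names hypothetical):

    nodetool tablestats ks.events | grep -iE 'bloom|sstable count'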

Re: TWCS + subrange repair = excessive re-compaction?

2018-09-25 Thread Martin Mačura
Most partitions in our dataset span one or two SSTables at most. But there might be a few that span hundreds of SSTables. If I located and deleted them (partition-level tombstone), would this fix the issue? Thanks, Martin On Mon, Sep 24, 2018 at 1:08 PM Jeff Jirsa wrote: > > > > > On Sep 24, 2
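For reference, a partition-level tombstone is just a DELETE restricted to the partition key; it only frees space once it compacts together with the partition's fragments and gc_grace_seconds has elapsed. A hedged CQL sketch with hypothetical names:

    -- drops the entire partition, however many SSTables it spans
    DELETE FROM ks.events WHERE sensor_id = 42;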

Re: TWCS + subrange repair = excessive re-compaction?

2018-09-24 Thread Martin Mačura
Hi, I can confirm the same issue in Cassandra 3.11.2. As an example: a TWCS table that normally has 800 SSTables (2 years' worth of daily windows plus some anticompactions) will peak at anywhere from 15k to 50k SSTables during a subrange repair. Regards, Martin On Mon, Sep 24, 2018 at 9:34 AM
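One way to observe the spike described above is to count Data.db components in the table's data directory while the subrange repair runs (path and table name hypothetical):

    watch -n 60 'ls /var/lib/cassandra/data/ks/events-*/ | grep -c Data.db'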

Re: Anticompaction causing significant increase in disk usage

2018-09-12 Thread Martin Mačura
> > I believe (and hope) this information is relevant to help you fix this issue. > > C*heers, > --- > Alain Rodriguez - @arodream - al...@thelastpickle.com > France / Spain > > The Last Pickle - Apache Cassandra Consulting > http://www.thelastpickle.com >

Anticompaction causing significant increase in disk usage

2018-09-12 Thread Martin Mačura
Hi, we're on cassandra 3.11.2. During an anticompaction after repair, the TotalDiskSpaceUsed value of one table gradually went from 700 GB to 1180 GB, and then suddenly jumped back to 700 GB. This happened on all nodes involved in the repair. There was no change in PercentRepaired during or after this p
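The pattern is consistent with anticompaction rewriting each affected SSTable into repaired and unrepaired sets, holding both old and new files until the rewrite completes. A sketch for watching the space and checking repaired state afterwards (paths and file names hypothetical):

    nodetool tablestats ks.events | grep 'Space used'
    sstablemetadata /var/lib/cassandra/data/ks/events-*/mc-1-big-Data.db | grep 'Repaired at'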

Re: Cassandra 3.11 and subrange repairs

2018-07-31 Thread Martin Mačura
I am using this tool with 3.11; I had to modify it to make it usable: https://github.com/BrianGallew/cassandra_range_repair/pull/60 Martin On Tue, Jul 31, 2018 at 3:44 PM Jean Carlo wrote: > > Hello everyone, > > I am just wondering if someone is using this tool to make repairs in > cassandra 3.
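For reference, a usage sketch of cassandra_range_repair; the flags are assumptions based on the project's README and may differ between versions:

    # split the node's token range into subranges and repair them one at a time
    python range_repair.py -k my_keyspace -s 100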

Re: Infinite loop of single SSTable compactions

2018-07-30 Thread Martin Mačura
tamp, --- > compaction will write to new temp file with _tmplink..., > > use sstablemetadata ... look at the largest or oldest one first > > of course, there may be other factors, like disk space, etc. > also what is compaction_throughput_mb_per_sec in cassandra.yaml > > Hop
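A sketch of the sstablemetadata check suggested above, ranking SSTables by age (data path hypothetical; output labels can vary slightly between versions):

    for f in /var/lib/cassandra/data/ks/events-*/*Data.db; do
        echo "$f: $(sstablemetadata "$f" | grep 'Minimum timestamp')"
    done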

Infinite loop of single SSTable compactions

2018-07-25 Thread Martin Mačura
Hi, we have a table which is being compacted all the time, with no change in size. Compaction history:

    compacted_at             bytes_in     bytes_out    rows_merged
    2018-07-25T05:26:48.101  57248063878  57248063878  {1:11655}
    2018-07-25T01:09:47.346  57248063878  57248063878  {1:11655}
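The listing above is the format printed by nodetool compactionhistory; identical bytes_in and bytes_out with rows_merged of {1:11655} (a single input SSTable) means one SSTable is being rewritten verbatim over and over. To reproduce the listing:

    nodetool compactionhistory | head -20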

Re: How to identify which table causing Maximum Memory usage limit

2018-06-11 Thread Martin Mačura
Hi, we've had this issue with large partitions (100 MB and more). Use nodetool tablehistograms to find partition sizes for each table. If you have enough heap space to spare, try increasing this parameter: file_cache_size_in_mb: 512 There's also the following parameter, but I did not test the im
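A sketch of both checks mentioned above (keyspace/table names hypothetical; the yaml value is the one quoted in the post):

    # partition size distribution per table
    nodetool tablehistograms ks events

    # cassandra.yaml
    file_cache_size_in_mb: 512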

Re: Repair slow, "Percent repaired" never updated

2018-06-06 Thread Martin Mačura
brought up (with nodetool rebuild). We cannot do a repair across datacenters, because nodes in the old DC would run out of disk space. Regards, Martin On Tue, Jun 5, 2018 at 6:06 PM, Martin Mačura wrote: > Hi, > we're on cassandra 3.11.2, and we're having some issues with re

Repair slow, "Percent repaired" never updated

2018-06-05 Thread Martin Mačura
Hi, we're on cassandra 3.11.2, and we're having some issues with repairs. They take ages to complete, and some time ago the incremental repair stopped working - that is, SSTables are not being marked as repaired, even though the repair reports success. Running a full or incremental repair does not
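For reference, the two repair modes in question (keyspace name hypothetical; in 3.x, nodetool repair runs incrementally by default):

    nodetool repair ks          # incremental (3.x default)
    nodetool repair -full ks    # full repair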

Re: Nodes unresponsive after upgrade 3.9 -> 3.11.2

2018-03-23 Thread Martin Mačura
Nevermind, we resolved the issue. JVM heap settings were misconfigured. Martin On Fri, Mar 23, 2018 at 1:18 PM, Martin Mačura wrote: > Hi all, > > We have a cluster of 3 nodes with RF 3 that ran fine until we upgraded > it to 3.11.2. > > Each node has 32 GB RAM, 8 GB C
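For context, heap sizing for 3.11 lives in conf/jvm.options (or cassandra-env.sh); a minimal sketch matching the 8 GB heap mentioned below, values purely illustrative:

    # conf/jvm.options -- pin min and max heap to the same size
    -Xms8G
    -Xmx8G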

Nodes unresponsive after upgrade 3.9 -> 3.11.2

2018-03-23 Thread Martin Mačura
Hi all, We have a cluster of 3 nodes with RF 3 that ran fine until we upgraded it to 3.11.2. Each node has 32 GB RAM, 8 GB Cassandra heap size. After the upgrade, clients started reporting connection issues:

    cassandra | [ERROR] Closing established connection pool to host because of the followi

Re: Rebuild to a new DC fails every time

2018-01-11 Thread Martin Mačura
Thanks for the tips, Alan. The cluster is entirely healthy. But the connection between DCs is a VPN managed by a third party - it may well be flaky. However, I would expect the rebuild job to be able to recover from connection timeout/reset type of errors without a need for manual int
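If the VPN drops long-lived connections, one cassandra.yaml knob worth checking is the streaming socket timeout; a sketch, with the value (24 hours) purely illustrative:

    # cassandra.yaml
    streaming_socket_timeout_in_ms: 86400000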

Re: Rebuild to a new DC fails every time

2018-01-08 Thread Martin Mačura
; it's previously been sent by the same session? Search the logs for the file > that failed and post back any exceptions. > > On 29 December 2017 at 10:18, Martin Mačura wrote: >> >> Is this something that can be resolved by CASSANDRA-11841? >> >> Thanks, >>

Re: Rebuild to a new DC fails every time

2017-12-29 Thread Martin Mačura
Is this something that can be resolved by CASSANDRA-11841? Thanks, Martin On Thu, Dec 21, 2017 at 3:02 PM, Martin Mačura wrote: > Hi all, > we are trying to add a new datacenter to the existing cluster, but the > 'nodetool rebuild' command always fails after a couple of

Rebuild to a new DC fails every time

2017-12-21 Thread Martin Mačura
Hi all, we are trying to add a new datacenter to the existing cluster, but the 'nodetool rebuild' command always fails after a couple of hours. We're on Cassandra 3.9. Example 1:

    172.24.16.169 INFO [STREAM-IN-/172.25.16.125:55735] 2017-12-13 23:55:38,840 StreamResultFuture.java:174 - [Stream #b
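For reference, rebuild is run on each node of the new datacenter, pointing at the source DC (DC name hypothetical):

    nodetool rebuild -- existing_dc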