ne
> > node is somehow behind), so if you don't run repairs, maybe it is just
> > somehow unsynchronized, but that is really just my guess.
> >
> > On Wed, 17 Apr 2019 at 21:39, Martin Mačura wrote:
> > >
> > > We cannot run any repairs on these tables. When
am not sure why it is called, to be
> honest.
>
> Is all fine with the db as such? Do you run repairs? Does that number
> increase or decrease over time? Does repair or compaction have some effect
> on it?
>
> On Wed, 17 Apr 2019 at 20:48, Martin Mačura wrote:
> >
> >
speculative_retry = 'NONE';
On Wed, Apr 17, 2019 at 12:25 PM Stefan Miklosovic <
stefan.mikloso...@instaclustr.com> wrote:
> What is your bloom_filter_fp_chance for either table? I guess it is
> bigger for the first one; the bigger that number is (between 0 and 1), the less
> memory it will
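The snippet is cut off here, but the trade-off being described is that a higher bloom_filter_fp_chance uses less memory at the cost of more false positives. A minimal sketch of how to check and lower it; the keyspace/table name ks.events is a placeholder and 0.01 is only an illustrative value, not a recommendation from this thread:

cqlsh -e "DESCRIBE TABLE ks.events;"   # current bloom_filter_fp_chance is shown in the table options
cqlsh -e "ALTER TABLE ks.events WITH bloom_filter_fp_chance = 0.01;"
# Bloom filters are only rebuilt when SSTables are rewritten, so force a rewrite:
nodetool upgradesstables -a ks events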
Hi,
I have a table with a poor bloom filter false ratio:
SSTable count: 1223
Space used (live): 726.58 GiB
Number of partitions (estimate): 8592749
Bloom filter false positives: 35796352
Bloom filter false ratio: 0.68472
Most partitions in our dataset span one or two SSTables at most. But
there might be a few that span hundreds of SSTables. If I located and
deleted them (partition-level tombstone), would this fix the issue?
Thanks,
Martin
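To make the question above concrete, a sketch of how one might hunt for the wide partitions and tombstone them; the keyspace, table and key value (ks.events, sensor_id = 42) are hypothetical, and whether this actually improves the false ratio is exactly what is being asked:

# Partition size and SSTables-per-read percentiles for one table:
nodetool tablehistograms ks events
# cassandra.yaml can also log offenders during compaction:
#   compaction_large_partition_warning_threshold_mb: 100
# Deleting by the full partition key writes a single partition-level tombstone:
cqlsh -e "DELETE FROM ks.events WHERE sensor_id = 42;"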
On Mon, Sep 24, 2018 at 1:08 PM Jeff Jirsa wrote:
>
>
>
>
> On Sep 24, 2
Hi,
I can confirm the same issue in Cassandra 3.11.2.
As an example: a TWCS table that normally has 800 SSTables (2 years'
worth of daily windows plus some anticompactions) will peak at
anywhere from 15k to 50k SSTables during a subrange repair.
Regards,
Martin
On Mon, Sep 24, 2018 at 9:34 AM
>
> I believe (and hope) this information is relevant to help you fix this issue.
>
> C*heers,
> ---
> Alain Rodriguez - @arodream - al...@thelastpickle.com
> France / Spain
>
> The Last Pickle - Apache Cassandra Consulting
> http://www.thelastpickle.com
>
>
Hi,
we're on cassandra 3.11.2. During an anticompaction after repair, the
TotalDiskSpaceUsed value of one table gradually went from 700 GB to
1180 GB, and then suddenly jumped back to 700 GB. This happened on all
nodes involved in the repair. There was no change in PercentRepaired
during or after this p
I am using this tool with 3.11; I had to modify it to make it usable:
https://github.com/BrianGallew/cassandra_range_repair/pull/60
Martin
On Tue, Jul 31, 2018 at 3:44 PM Jean Carlo wrote:
>
> Hello everyone,
>
> I am just wondering if someone is using this tool to make repairs in
> cassandra 3.
tamp, ---
> compaction will write to a new temp file with _tmplink...,
>
> use sstablemetadata ... look at the largest or oldest one first
>
> of course, other factors may be at play, like disk space, etc.
> also, what is compaction_throughput_mb_per_sec set to in cassandra.yaml?
>
> Hop
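A minimal sketch of the commands being suggested in the reply above; the data-file path is a placeholder and 64 MB/s is only an illustrative throttle:

# Dump SSTable metadata (min/max timestamps, estimated droppable tombstones, ...):
sstablemetadata /var/lib/cassandra/data/ks/events-*/mc-*-big-Data.db
# Check, and if needed raise, the compaction throughput cap at runtime (0 = unthrottled):
nodetool getcompactionthroughput
nodetool setcompactionthroughput 64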
Hi,
we have a table which is being compacted all the time, with no change in size:
Compaction History:
compacted_at              bytes_in      bytes_out     rows_merged
2018-07-25T05:26:48.101   57248063878   57248063878   {1:11655}
2018-07-25T01:09:47.346   57248063878   57248063878   {1:11655}
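The rows above look like compaction history output; a sketch of how to pull it for a single table, with ks/events as placeholder names:

nodetool compactionhistory | grep events
cqlsh -e "SELECT compacted_at, bytes_in, bytes_out, rows_merged FROM system.compaction_history WHERE columnfamily_name = 'events' ALLOW FILTERING;"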
Hi,
we've had this issue with large partitions (100 MB and more). Use
nodetool tablehistograms to find partition sizes for each table.
If you have enough heap space to spare, try increasing this parameter:
file_cache_size_in_mb: 512
There's also the following parameter, but I did not test the im
brought up (with nodetool rebuild). We cannot do a repair
across datacenters, because nodes in the old DC would run out of disk
space.
Regards,
Martin
On Tue, Jun 5, 2018 at 6:06 PM, Martin Mačura wrote:
> Hi,
> we're on cassandra 3.11.2, and we're having some issues with re
Hi,
we're on cassandra 3.11.2, and we're having some issues with repairs.
They take ages to complete, and some time ago the incremental repair
stopped working - that is, SSTables are not being marked as repaired,
even though the repair reports success.
Running a full or incremental repair does not
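One way to check whether SSTables were actually marked as repaired is their repairedAt metadata (unrepaired SSTables report Repaired at: 0); a sketch, with the data path as a placeholder:

for f in /var/lib/cassandra/data/ks/events-*/*-big-Data.db; do
  echo "$f: $(sstablemetadata "$f" | grep 'Repaired at')"
done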
Nevermind, we resolved the issue; the JVM heap settings were misconfigured.
Martin
On Fri, Mar 23, 2018 at 1:18 PM, Martin Mačura wrote:
> Hi all,
>
> We have a cluster of 3 nodes with RF 3 that ran fine until we upgraded
> it to 3.11.2.
>
> Each node has 32 GB RAM, 8 GB C
Hi all,
We have a cluster of 3 nodes with RF 3 that ran fine until we upgraded
it to 3.11.2.
Each node has 32 GB RAM, 8 GB Cassandra heap size.
After the upgrade, clients started reporting connection issues:
cassandra | [ERROR] Closing established connection pool to host
because of the followi
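Given the resolution posted above (misconfigured JVM heap settings), for reference a sketch of how an explicit heap is usually pinned in 3.11's conf/jvm.options; the 8 GB figure simply mirrors what this thread mentions and is not a sizing recommendation:

# conf/jvm.options -- set min and max heap to the same value
-Xms8G
-Xmx8G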
Thanks for the tips, Alain. The cluster is entirely healthy. But the
connection between DCs is a VPN, managed by a third party - it is
possible it might be flaky. However, I would expect the rebuild job to
be able to recover from connection timeout/reset type of errors
without a need for manual int
; it's previously been sent by the same session? Search the logs for the file
> that failed and post back any exceptions.
>
> On 29 December 2017 at 10:18, Martin Mačura wrote:
>>
>> Is this something that can be resolved by CASSANDRA-11841 ?
>>
>> Thanks,
>>
Is this something that can be resolved by CASSANDRA-11841 ?
Thanks,
Martin
On Thu, Dec 21, 2017 at 3:02 PM, Martin Mačura wrote:
> Hi all,
> we are trying to add a new datacenter to the existing cluster, but the
> 'nodetool rebuild' command always fails after a couple of
Hi all,
we are trying to add a new datacenter to the existing cluster, but the
'nodetool rebuild' command always fails after a couple of hours.
We're on Cassandra 3.9.
Example 1:
172.24.16.169 INFO [STREAM-IN-/172.25.16.125:55735] 2017-12-13
23:55:38,840 StreamResultFuture.java:174 - [Stream
#b
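When a rebuild stream repeatedly dies partway through on 3.x, the setting usually checked first is the streaming socket timeout in cassandra.yaml; a sketch only, the value is illustrative and the parameter should be verified against your exact version (it was removed in 4.0):

# cassandra.yaml
streaming_socket_timeout_ms: 86400000    # 24 hours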