Hi Jeff/Jon et al, here is what I'm thinking to do to clean up, please lmk
what you think.
This is precisely my problem I believe:
http://thelastpickle.com/blog/2017/12/14/should-you-use-incremental-repair.html
With this I have a lot of wasted space due to a bad incremental repair. So
I am think…

In fact, all of them say "Repaired at: 0".
On Tue, Aug 7, 2018 at 9:13 PM Brian Spindler wrote:
Hi, I spot checked a couple of the files that were ~200MB and they mostly
had "Repaired at: 0", so maybe that's not it?
-B
On Tue, Aug 7, 2018 at 8:16 PM wrote:
Everything is ttl’d
I suppose I could use sstablemetadata to see the repaired bit; could I just
set that to unrepaired somehow, and would that fix it?
Thanks!
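Brian's idea maps onto two stock Cassandra 2.1 tools: sstablemetadata to read the repairedAt field, and sstablerepairedset to clear it. A minimal sketch, assuming the tools are on the PATH; the keyspace/table path is a hypothetical example, not from this thread, and sstablerepairedset must only be run with the node stopped, so it is left commented out:

```shell
# Sketch: inspect the repairedAt field of each sstable.
# DATA_DIR is a hypothetical example path; adjust to your keyspace/table.
DATA_DIR=/var/lib/cassandra/data/myks/mytable

for f in "$DATA_DIR"/*-Data.db; do
  [ -e "$f" ] || continue          # skip cleanly if the glob matched nothing
  sstablemetadata "$f" | grep 'Repaired at'
done

# With the node STOPPED, mark the sstables back to unrepaired:
# sstablerepairedset --really-set --is-unrepaired "$DATA_DIR"/*-Data.db
```

Any sstable showing a nonzero "Repaired at" is in the repaired set and, as Jeff notes below, will not compact with unrepaired ones.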
On Aug 7, 2018, at 8:12 PM, Jeff Jirsa wrote:
May be worth seeing if any of the sstables got promoted to repaired - if so
they’re not eligible for compaction with unrepaired sstables and that could
explain some higher counts
Do you actually do deletes or is everything ttl’d?
--
Jeff Jirsa
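Jeff's check (did any sstables get promoted to repaired?) can be scripted as a tally of the "Repaired at" values across all sstables. A sketch: on a node you would pipe in real `sstablemetadata` output; the printf below is made-up sample input so the pipeline can be shown end to end.

```shell
# Tally "Repaired at" values. On a node, feed this the output of e.g.
#   sstablemetadata /path/to/ks/table/*-Data.db
summarize_repaired() {
  grep 'Repaired at' | sort | uniq -c
}

# Made-up sample input standing in for real sstablemetadata output:
printf 'Repaired at: 0\nRepaired at: 1533686014385\nRepaired at: 0\n' \
  | summarize_repaired
```

A mix of zero and nonzero values here would confirm Jeff's hypothesis: the two sets compact separately, which can inflate file counts.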
On Aug 7, 2018, at 5:09 PM, Brian Spindler wrote:
Hi Jeff, mostly lots of little files: there will be 4-5 that are
1-1.5 GB or so, and then many at 5-50 MB and many at 40-50 MB each.

Re incremental repair: yes, one of my engineers started an incremental
repair on this column family that we had to abort. In fact, the node that
the repair was init…
You could toggle off the tombstone compaction to see if that helps, but that
should be lower priority than normal compactions
Are the lots-of-little-files from memtable flushes or repair/anticompaction?
Do you do normal deletes? Did you try to run Incremental repair?
--
Jeff Jirsa
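Toggling tombstone compaction off, per Jeff's suggestion, is an ALTER TABLE on the compaction map. A sketch only: the table name is hypothetical, and on a 2.1 cluster the backported TWCS class may need its fully qualified name; keep your existing class and window settings and change just the one flag:

```shell
# Sketch (hypothetical table name): disable the speculative single-sstable
# tombstone compactions while leaving the TWCS windows themselves alone.
cqlsh -e "
ALTER TABLE myks.mytable WITH compaction = {
  'class': 'TimeWindowCompactionStrategy',
  'compaction_window_unit': 'DAYS',
  'compaction_window_size': '1',
  'unchecked_tombstone_compaction': 'false'
};"
```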
Hi Jonathan, both I believe.
The window size is 1 day, full settings:
AND compaction = {'timestamp_resolution': 'MILLISECONDS',
'unchecked_tombstone_compaction': 'true', 'compaction_window_size': '1',
'compaction_window_unit': 'DAYS', 'tombstone_compaction_interval': '86400',
'tombstone_thresh…
What's your window size?
When you say backed up, how are you measuring that? Are there pending
tasks or do you just see more files than you expect?
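Jonathan's two measurements map onto quick checks on each node (a sketch; the data path is a hypothetical example):

```shell
# Pending compaction tasks; a persistently large number means
# compaction really is falling behind.
nodetool compactionstats

# Rough sstable count for one table; adjust the path to your layout.
ls /var/lib/cassandra/data/myks/mytable/*-Data.db | wc -l
```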
On Tue, Aug 7, 2018 at 4:38 PM Brian Spindler wrote:
Hey guys, quick question:
I've got a v2.1 Cassandra cluster, 12 nodes on AWS i3.2xl, commit log on
one drive, data on NVMe. That was working very well; it's a time-series DB
and has been accumulating data for about 4 weeks.

The nodes have increased in load and compaction seems to be falling
behind. I use…