Hi,
just as a short follow-up: it worked - all nodes now have 20-30 sstables
instead of thousands.
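For anyone wanting to double-check the same thing on their own nodes, a
rough per-table sstable count is just the number of Data.db files in that
table's data directory - the path and keyspace/table names below are only
placeholders for the default layout:

    # count sstable data files for one table on this node
    # (adjust the data path and keyspace/table to your setup)
    find /var/lib/cassandra/data/my_keyspace/my_table-*/ -name '*-Data.db' | wc -l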
Cheers,
Roland
Hi,
I overlooked that discussion. Indeed, right now that column family has a
very heavy write load and only very minimal reads on it (but there are
reads). I ran compactionstats several times over the last days, and
sometimes a small bunch of tables gets compacted, about 10-15. After
that there a...
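To keep an eye on this over time instead of re-running it by hand, a
simple loop does the job - the 60 second interval below is arbitrary:

    # print pending/active compactions once a minute
    while true; do
        date
        nodetool compactionstats
        sleep 60
    done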
There's another thread going on right now in this list about compactions
not happening when they seemingly should. Tyler Hobbs postulates a bug and
a workaround for it, so maybe try that out, and if that fixes anything for
you, certainly let him know. The bug Tyler postulates on is triggered when
y...
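If the bug in question is the one about cold sstables being skipped by
size-tiered compaction, the workaround amounts to setting the
cold_reads_to_omit sub-option to 0 on the affected table. A rough sketch,
with placeholder keyspace/table names (note this replaces the whole
compaction map, so re-specify any other sub-options you rely on):

    # hedged sketch: stop STCS from omitting "cold" (rarely read) sstables
    cqlsh -e "ALTER TABLE my_keyspace.my_table
      WITH compaction = {'class': 'SizeTieredCompactionStrategy',
                         'cold_reads_to_omit': 0.0};"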
Hi Eric and all,
I almost expected this kind of answer. I did a nodetool compactionstats
already to see if those sstables are being compacted, but on all nodes
there are 0 outstanding compactions (right now in the morning, not
running any tests on this cluster).
The reported read latency is ab...
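For completeness, one place to read those per-table numbers (sstable
count, read count and latency) is nodetool cfstats on each node, e.g.
with placeholder keyspace/table names:

    # per-table stats on this node; the "SSTable count" and
    # "Read Latency" lines are the interesting ones here
    nodetool cfstats my_keyspace.my_table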
Yes, many sstables can have a huge negative impact on read performance,
and will also create memory pressure on that node.
There are a lot of things which can produce this effect, and it also
strongly suggests you're falling behind on compaction in general (check
nodetool compactionstats, you should...
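One quick check for whether reads really are touching lots of sstables
is the per-table histogram; a small example with placeholder
keyspace/table names:

    # distribution of sstables touched per read, plus latency percentiles
    nodetool cfhistograms my_keyspace my_table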
Hi,
I'm testing around with Cassandra a fair bit, using 2.1.2 which I know
has some major issues, but it is a test environment. After some bulk
loading, testing with incremental repairs and running out of heap once,
I found that I now have a quite large number of sstables which are
really small: