https://issues.apache.org/jira/browse/CASSANDRA-2575
On Thu, Apr 21, 2011 at 11:56 PM, Jonathan Ellis wrote:
> I suggest as a workaround making the forceUserDefinedCompaction method
> ignore disk space estimates and attempt the requested compaction even
> if it guesses it will not have enough space.
I think the really interesting part is how this node ended up in this state
in the first place.
There should be somewhere in the area of 340-500GB of data on it when
everything is 100% compacted.
The problem now is that it used (we wiped it last night to test some 0.8 stuff)
more than 1TB.
To me,
I suggest as a workaround making the forceUserDefinedCompaction method
ignore disk space estimates and attempt the requested compaction even
if it guesses it will not have enough space. This would allow you to
submit the 2 sstables you want manually.
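The workaround being proposed amounts to bypassing the free-space check for user-submitted compactions. A minimal Python sketch of that decision logic (hypothetical names and signature; Cassandra's actual check lives in Java inside CompactionManager):

```python
def should_attempt_compaction(estimated_output_bytes, free_bytes,
                              ignore_estimate=False):
    """Decide whether to run a compaction.

    ignore_estimate models the proposed workaround: a user-defined
    compaction proceeds even when the size estimate says the disk may
    be too full. (Illustrative sketch, not Cassandra's code.)
    """
    if ignore_estimate:
        return True
    return estimated_output_bytes <= free_bytes
```

With the flag set, the operator takes responsibility for the disk-space risk in exchange for being able to force the exact pair of sstables through.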
On Thu, Apr 21, 2011 at 8:34 AM, Shotaro Kamio
wrote:
Running at 78% disk capacity is somewhat out there on the edge.
The CompactionManager is showing that compactions are backing up. I'm guessing
this has to do with the minor compactions not being able to compact the list of
files they want to, so each compaction cannot reduce the number of files.
Hi Aaron,
Maybe my previous description was not clear. It's not a compaction
threshold problem.
In fact, Cassandra tries to compact 7 sstables in the minor
compaction. But it decreases the number of sstables one by one due to
insufficient disk space. At the end, it compacts a single file as in
th
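The behaviour described, shedding sstables from the bucket one at a time until the estimated output fits the remaining disk, can be sketched as follows (a hypothetical simplification: it assumes worst-case output size equals the sum of the inputs and that the largest candidate is dropped first; Cassandra's real logic is in Java):

```python
def pick_sstables_for_minor_compaction(sstable_sizes, free_bytes):
    """Reduce the compaction bucket until its estimated output fits
    in the available space. Illustrative sketch, not Cassandra's code.

    Worst case (no overwrites/tombstones), the compacted output is
    about the sum of the input sizes, so we drop the largest sstable
    repeatedly until the remaining set fits.
    """
    candidates = sorted(sstable_sizes, reverse=True)  # largest first
    while candidates and sum(candidates) > free_bytes:
        candidates.pop(0)  # shed the largest sstable from the bucket
    return candidates
```

This reproduces the degenerate case in the report: start with 7 sstables, keep dropping, and end up "compacting" a single file, which reclaims nothing.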
Want to check if you are talking about minor compactions or major (nodetool)
compactions.
What compaction settings do you have for this CF? You can increase
the min compaction threshold and reduce the frequency of compactions:
http://wiki.apache.org/cassandra/StorageConfiguration
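In the 0.7/0.8-era versions under discussion, the thresholds are per-column-family attributes and can be changed live, e.g. via cassandra-cli (the CF name `Standard1` and the values here are placeholders, not a recommendation):

```
update column family Standard1 with min_compaction_threshold = 8
    and max_compaction_threshold = 32;
```

Raising min_compaction_threshold means more sstables must accumulate in a bucket before a minor compaction is triggered, which reduces compaction frequency at the cost of more live sstables per read.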
It se