[
https://issues.apache.org/jira/browse/CASSANDRA-21245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18067993#comment-18067993
]
Dmitry Konstantinov edited comment on CASSANDRA-21245 at 3/24/26 1:21 PM:
--------------------------------------------------------------------------
It could be related to CASSANDRA-17931: logic was added there to account for the space used by the currently active compactions as occupied.
The following methods are used to calculate it:
* org.apache.cassandra.db.compaction.ActiveCompactions#estimatedRemainingWriteBytes
* org.apache.cassandra.db.compaction.CompactionInfo#estimatedRemainingWriteBytes
estimatedRemainingWriteBytes is calculated as (total - completed) for active compactions, and here we have c73b9e00-2451-11f1-9e8a-9937f6f9033a with 136.33 GiB - 58.16 GiB = 78.17 GiB... (it is interesting how we get 136.33 GiB as the total when the disk size is 49G)
To be continued..
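A quick sanity check of that arithmetic (an illustrative sketch of the (total - completed) accounting with the numbers from the compactionstats output below; this is not the actual Cassandra code, and the class name is mine):

```java
// Sketch: model estimatedRemainingWriteBytes as (total - completed).
public class RemainingWriteBytesSketch {
    static final long GIB = 1024L * 1024L * 1024L;

    static long estimatedRemainingWriteBytes(long totalBytes, long completedBytes) {
        return totalBytes - completedBytes;
    }

    public static void main(String[] args) {
        long total = (long) (136.33 * GIB);     // reported "total" for the active compaction
        long completed = (long) (58.16 * GIB);  // reported "completed"
        long remaining = estimatedRemainingWriteBytes(total, completed);
        System.out.printf("remaining ~= %.2f GiB%n", (double) remaining / GIB);
        // ~78.17 GiB is treated as reserved -- more than the whole 49G volume,
        // which would explain further compactions being denied for lack of space.
    }
}
```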
was (Author: dnk):
It could be related to CASSANDRA-17931: logic was added there to account for the space used by the currently active compactions as occupied.
The following methods are used to calculate it:
* org.apache.cassandra.db.compaction.ActiveCompactions#estimatedRemainingWriteBytes
* org.apache.cassandra.db.compaction.CompactionInfo#estimatedRemainingWriteBytes
estimatedRemainingWriteBytes is calculated as (total - completed) for active compactions, and here we have c73b9e00-2451-11f1-9e8a-9937f6f9033a with 136.33 GiB - 58.16 GiB = 78.17 GiB...
To be continued..
> Uncompressed size is being used for compressed tables in maintenance
> operations
> -------------------------------------------------------------------------------
>
> Key: CASSANDRA-21245
> URL: https://issues.apache.org/jira/browse/CASSANDRA-21245
> Project: Apache Cassandra
> Issue Type: Bug
> Reporter: vasya b
> Priority: Normal
> Attachments: cassandra.yaml, debug.log, go.mod, main.go
>
>
> Using a compressed table can lead to a state where the uncompressed table size is
> larger than the whole volume. While that by itself shouldn't be a problem, it leads
> to compaction problems, as other compaction tasks are denied.
> For example, one can easily create a compressed table with
> f.e. one can easily create a compressed table with
> {code:java}
> CREATE TABLE IF NOT EXISTS ` + keyspace + `.` + table + ` (
> pk bigint,
> data text,
> PRIMARY KEY (pk)
> ) WITH compression = {'class': 'DeflateCompressor'}
> {code}
> (or any other compression)
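To see why such a payload makes the on-disk size tiny relative to the logical size, here is a stand-alone check of the DEFLATE ratio on a single-character buffer (an illustration, not the attached main.go reproducer; the class name is mine):

```java
import java.util.zip.Deflater;

// Stand-alone check: how well a 1 MiB run of a single byte value compresses.
public class DeflateRatioSketch {
    static int deflatedSize(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] out = new byte[input.length];
        int n = 0;
        while (!deflater.finished()) {
            n += deflater.deflate(out); // count compressed bytes produced
        }
        deflater.end();
        return n;
    }

    public static void main(String[] args) {
        byte[] payload = new byte[1024 * 1024]; // 1 MiB of zero bytes
        System.out.println("1 MiB deflates to " + deflatedSize(payload) + " bytes");
        // The ratio is in the hundreds-to-thousands, so an uncompressed "total"
        // used by compaction accounting dwarfs what is actually on disk.
    }
}
```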
> Insert a bunch of data that compresses very well (e.g. a bunch of zeros, or any
> other single character repeated 1024 * 1024 times, inserted N times where N is
> some big number). One would expect such a table to be compacted without any
> problems, but instead one gets:
> {code:java}
> WARN [CompactionExecutor:8] 2026-03-20 14:25:17,256 CompactionTask.java:434 - Not enough space for compaction (78c5c7c0-244f-11f1-9e8a-9937f6f9033a) of zerotohero.bulk_data, estimated sstables = 1, expected write size = 2686214
> ERROR [CompactionExecutor:8] 2026-03-20 14:25:17,256 JVMStabilityInspector.java:70 - Exception in thread Thread[CompactionExecutor:8,5,CompactionExecutor]
> java.lang.RuntimeException: Not enough space for compaction (78c5c7c0-244f-11f1-9e8a-9937f6f9033a) of zerotohero.bulk_data, estimated sstables = 1, expected write size = 2686214
>     at org.apache.cassandra.db.compaction.CompactionTask.buildCompactionCandidatesForAvailableDiskSpace(CompactionTask.java:436)
>     at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:148)
>     at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:26)
>     at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:94)
>     at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:100)
>     at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:374)
>     at org.apache.cassandra.concurrent.FutureTask$3.call(FutureTask.java:141)
>     at org.apache.cassandra.concurrent.FutureTask.call(FutureTask.java:61)
>     at org.apache.cassandra.concurrent.FutureTask.run(FutureTask.java:71)
>     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
>     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
>     at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>     at java.base/java.lang.Thread.run(Thread.java:840)
> WARN [CompactionExecutor:9] 2026-03-20 15:21:27,534 CompactionTask.java:440 - Not enough space for compaction 51834c70-2457-11f1-9e8a-9937f6f9033a, 71.31226MiB estimated. Reducing scope.
> DEBUG [CompactionExecutor:8] 2026-03-20 15:25:27,447 Directories.java:550 - FileStore /mnt/cassandra (/dev/sdb1) has 36221336371 bytes available, checking if we can write 95979501107 bytes
> WARN [CompactionExecutor:8] 2026-03-20 15:25:27,447 Directories.java:553 - FileStore /mnt/cassandra (/dev/sdb1) has only 33.73 GiB available, but 89.39 GiB is needed
> WARN [CompactionExecutor:8] 2026-03-20 15:25:27,447 CompactionTask.java:434 - Not enough space for compaction (e08281c0-2457-11f1-9e8a-9937f6f9033a) of zerotohero.bulk_data_zstd, estimated sstables = 1, expected write size = 2195528
> ERROR [CompactionExecutor:8] 2026-03-20 15:25:27,447 JVMStabilityInspector.java:70 - Exception in thread Thread[CompactionExecutor:8,5,CompactionExecutor]
> java.lang.RuntimeException: Not enough space for compaction (e08281c0-2457-11f1-9e8a-9937f6f9033a) of zerotohero.bulk_data_zstd, estimated sstables = 1, expected write size = 2195528
>     at org.apache.cassandra.db.compaction.CompactionTask.buildCompactionCandidatesForAvailableDiskSpace(CompactionTask.java:436)
>     at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:148)
>     at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:26)
>     at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:94)
>     at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:100)
>     at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:374)
>     at org.apache.cassandra.concurrent.FutureTask$3.call(FutureTask.java:141)
>     at org.apache.cassandra.concurrent.FutureTask.call(FutureTask.java:61)
>     at org.apache.cassandra.concurrent.FutureTask.run(FutureTask.java:71)
>     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
>     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
>     at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>     at java.base/java.lang.Thread.run(Thread.java:840)
> DEBUG [CompactionExecutor:8] 2026-03-20 15:25:27,447 HeapUtils.java:133 - Heap dump creation on uncaught exceptions is disabled.
> ERROR [CompactionExecutor:12] 2026-03-20 16:09:13,125 JVMStabilityInspector.java:70 - Exception in thread Thread[CompactionExecutor:12,5,CompactionExecutor]
> java.lang.RuntimeException: Not enough space for compaction (fd9ba330-245d-11f1-9e8a-9937f6f9033a) of zerotohero.bulk_data_deflate, estimated sstables = 1, expected write size = 2200455
>     at org.apache.cassandra.db.compaction.CompactionTask.buildCompactionCandidatesForAvailableDiskSpace(CompactionTask.java:436)
>     at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:148)
>     at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:26)
>     at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:94)
>     at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:100)
>     at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:374)
>     at org.apache.cassandra.concurrent.FutureTask$3.call(FutureTask.java:141)
>     at org.apache.cassandra.concurrent.FutureTask.call(FutureTask.java:61)
>     at org.apache.cassandra.concurrent.FutureTask.run(FutureTask.java:71)
>     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
>     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
>     at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>     at java.base/java.lang.Thread.run(Thread.java:840)
> DEBUG [CompactionExecutor:12] 2026-03-20 16:09:13,126 HeapUtils.java:133 - Heap dump creation on uncaught exceptions is disabled.
> {code}
> while there is enough space:
> {code:java}
> root@zerotohero:~# df -h
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/sdb1        49G  4.5G   42G  10% /mnt/cassandra
> root@zerotohero:~# nodetool compactionstats -H
> concurrent compactors              2
> pending tasks                      31
> zerotohero bulk_data               31
> compactions completed              5009
> data compacted                     1.1 TiB
> compactions aborted                174
> compactions reduced                1
> sstables dropped from compaction   51
> 15 minute rate                     5.33/minute
> mean rate                          1226.87/hour
> compaction throughput (MiB/s)      64.0
>
> id                                     compaction type   keyspace     table       completed   total        unit    progress
> c73b9e00-2451-11f1-9e8a-9937f6f9033a   Compaction        zerotohero   bulk_data   58.16 GiB   136.33 GiB   bytes   42.66%
> active compaction remaining time   0h20m50s
> {code}
> A simple reproducer in Go is included.
> Cassandra versions tested: 5.0.6 and 5.0.7 are affected; 4.1.11 works fine.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]