Hello,
I can also confirm the problem that Joe Ryner (on 14.2.2) and Oliver
Freyermuth described.
My Ceph version is 14.2.4.
-----------------------------------------------------
# ceph health detail
HEALTH_WARN 1 subtrees have overcommitted pool target_size_bytes; 1 subtrees have overcommitted pool target_size_ratio
POOL_TARGET_SIZE_BYTES_OVERCOMMITTED 1 subtrees have overcommitted pool target_size_bytes
    Pools ['volumes', 'backups', 'images', 'cephfs_cindercache', 'rbd', 'vms'] overcommit available storage by 1.308x due to target_size_bytes 0 on pools []
POOL_TARGET_SIZE_RATIO_OVERCOMMITTED 1 subtrees have overcommitted pool target_size_ratio
    Pools ['volumes', 'backups', 'images', 'cephfs_cindercache', 'rbd', 'vms'] overcommit available storage by 1.308x due to target_size_ratio 0.000 on pools []
-----------------------------------------------------
-----------------------------------------------------
# ceph df
RAW STORAGE:
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 659 TiB 371 TiB 287 TiB 288 TiB 43.71
ssd 67 TiB 56 TiB 11 TiB 11 TiB 16.47
TOTAL 726 TiB 427 TiB 298 TiB 299 TiB 41.19
POOLS:
    POOL                            ID    STORED     OBJECTS    USED       %USED    MAX AVAIL
    volumes                          5    87 TiB     22.94M     261 TiB    50.63       85 TiB
    backups                          6    0 B        0          0 B        0          127 TiB
    images                           7    8.6 TiB    2.26M      26 TiB     9.21        85 TiB
    fastvolumes                      9    3.7 TiB    1.93M      11 TiB     18.67       16 TiB
    cephfs_cindercache              10    0 B        0          0 B        0           85 TiB
    cephfs_cindercache_metadata     11    312 KiB    102        1.3 MiB    0           16 TiB
    rbd                             12    0 B        0          0 B        0           85 TiB
    vms                             13    0 B        0          0 B        0           85 TiB
-----------------------------------------------------
-----------------------------------------------------
# ceph osd pool autoscale-status
POOL                            SIZE      TARGET SIZE    RATE    RAW CAPACITY    RATIO     TARGET RATIO    BIAS    PG_NUM    NEW PG_NUM    AUTOSCALE
cephfs_cindercache_metadata     1300k                    3.0     68930G          0.0000                    1.0        4                    on
fastvolumes                     11275G                   3.0     68930G          0.4907                    1.0      128                    on
cephfs_cindercache              0                        3.0     658.5T          0.0000                    1.0        4                    on
volumes                         261.2T                   3.0     658.5T          1.1898                    1.0     2048                    on
images                          26455G                   3.0     658.5T          0.1177                    1.0      128                    on
backups                         0                        2.0     658.5T          0.0000                    1.0        4                    on
rbd                             0                        3.0     658.5T          0.0000                    1.0        4                    on
vms                             0                        3.0     658.5T          0.0000                    1.0        4                    on
-----------------------------------------------------
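For what it's worth, the 1.308x in the health warning looks like it is simply
the sum of the RATIO column for the pools on the hdd root above
(1.1898 + 0.1177 ≈ 1.308), even though the message attributes the overcommit
to target_size_bytes 0 / target_size_ratio 0.000 on pools []. Below is a
minimal Python sketch of that arithmetic, assuming the autoscaler sums the
per-pool capacity ratios within each CRUSH subtree and warns when the total
exceeds 1.0; the ratios are copied from the autoscale-status output above.
-----------------------------------------------------
# Sanity check of the 1.308x figure (assumption: the health check adds up
# each pool's RATIO within a CRUSH subtree). Not part of Ceph itself.
hdd_pool_ratios = {
    "volumes": 1.1898,
    "images": 0.1177,
    "backups": 0.0,
    "cephfs_cindercache": 0.0,
    "rbd": 0.0,
    "vms": 0.0,
}

total = sum(hdd_pool_ratios.values())
# Prints ~1.3075, which matches the 1.308x from "ceph health detail".
print(f"hdd subtree capacity ratio: {total:.4f}")
-----------------------------------------------------
If that reading is right, the spurious warning here comes from the RATIO
values themselves rather than from any target_size_bytes or target_size_ratio
we have set.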
Best regards
Björn