Hi Guillaume,
Your commit 4941d09
https://github.com/ceph/ceph/commit/4941d098e337f2b7ad8c6f7c90be3ae252d22f7b
introduces this bug.
Please have a look at https://tracker.ceph.com/issues/72696
The changes to src/ceph-volume/ceph_volume/util/device.py in particular
caused this; after reverting them I get the expected behaviour.
On 07.08.25 at 4:18 PM, Robert Sander wrote:
we have OSD nodes currently consisting of two 605GB SSDs and six 18TB
HDDs. The hosts have room for twelve HDDs.
We created a drivegroup spec that looks like this:
spec:
  block_db_size: 100GB
  data_devices:
    rotational: true
    size: '18TB:'
  db_devices:
    rotational: false
    size: '550GB:650GB'
  db_slots: 6
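As an aside, the size filters in the spec are 'LOW:HIGH' ranges ('18TB:' means "at least 18TB"). A minimal sketch of how such a range filter can be matched against a device size (this is only an illustration in plain GB numbers, not ceph-volume's actual implementation, which also parses unit suffixes):

```python
def matches_size_filter(size_gb: float, spec: str) -> bool:
    """Match a device size against a 'LOW:HIGH' range filter.

    'LOW:' means at least LOW, ':HIGH' means at most HIGH,
    'LOW:HIGH' is an inclusive range. Sizes are plain GB here;
    the real drivegroup syntax accepts suffixes like '18TB'.
    """
    low_s, _, high_s = spec.partition(':')
    low = float(low_s) if low_s else 0.0
    high = float(high_s) if high_s else float('inf')
    return low <= size_gb <= high

# A 605 GB SSD matches the db_devices range '550:650':
print(matches_size_filter(605, '550:650'))    # True
# An 18 TB (18000 GB) HDD does not:
print(matches_size_filter(18000, '550:650'))  # False
# The HDD matches the data_devices filter '18000:' ("at least"):
print(matches_size_filter(18000, '18000:'))   # True
```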
Initially this creates six OSDs with their RocksDB+WAL on the SSDs,
three per SSD, which is nice for load balancing.
But when we add another HDD, it gets a 17.9TB data volume and a 100GB DB
volume, both on the HDD:
sdm                                                                                                    8:192  0   18T  0 disk
├─ceph--846e1a59--aff6--4ef8--9b71--de7241531677-osd--block--026e8cef--123d--47d9--9b30--211f94edf96c 252:16  0 17.9T  0 lvm
└─ceph--846e1a59--aff6--4ef8--9b71--de7241531677-osd--db--88c47d0b--f5c6--4cec--8909--c5f8036ca459   252:17  0  100G  0 lvm
I would have assumed that the remaining 305GB on the SSDs would be used.
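For context, a quick back-of-the-envelope check with the numbers above (just arithmetic, not ceph behaviour):

```python
# Capacity check with the figures from the spec:
# 605 GB SSDs, block_db_size: 100GB, three DBs already placed per SSD.
ssd_size_gb = 605
db_size_gb = 100
dbs_per_ssd = 3

free_per_ssd = ssd_size_gb - dbs_per_ssd * db_size_gb
print(free_per_ssd)                 # 305 (GB still free on each SSD)

# Each SSD could still hold three more 100GB DB volumes,
# consistent with db_slots: 6 per SSD:
print(free_per_ssd // db_size_gb)   # 3
```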
Regards
--
Robert Sander
Linux Consultant
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: +49 30 405051 - 0
Fax: +49 30 405051 - 19
Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]