Hi Anthony,
Appreciate you taking the time to provide so much guidance.
All your messages on this mailing list are well documented and VERY helpful.
I have attached a text file with the output of the commands you mentioned.
You are right, there is no bluefs_/bluestore_bdev entry pointing to the
NVMe namespaces.
No specific reason to use a separate WAL, I just remembered it was recommended
back in the "mimic" days (yes, I am a bit "rusty").
I am using 4+2 EC.
I have 2 separate NVMe drives (1.6 TB each) dedicated to DB/WAL for the HDDs.
I want to use one for the 6 HDDs that I currently have, and save the other
for when I will be adding more HDDs.
The servers are SuperMicro SSG-641E with 2 x Intel Gold 6530 (32 cores
each) and 1 TB RAM.
My plan was/is to use the 3 x 15 TB NVMe drives on each server for
high-performance pools (like metadata for CephFS or index for RGW).
I have carved each into 3 namespaces (5 TB each) so I can deploy more PGs and
hence increase performance.
The SSDs are meant to be used for RBD (Proxmox).
The HDDs are meant to be used for archiving data using S3 and CephFS.
I can redeploy - please clarify 2 things about the OSD spec file:
1. Can I use NVMe namespaces (created with the nvme command - see below), or
should I let Ceph partition the NVMe disk?
   In either case, how do I specify WHICH device/NVMe to use for DB/WAL
(as I have 2 identical ones and want to keep the second one for adding
more HDDs in the future)?
nvme create-ns $device --nsze=$per_ns_blocks --ncap=$per_ns_blocks --flbas=0 --dps=0
nvme attach-ns $device --namespace-id=$i --controllers=`nvme id-ctrl $device
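For what it's worth, my guess (and I am not sure this is valid syntax, so
please correct me) is that the device could be pinned with an explicit path
in the spec; the path below is just an example, not from my cluster:

```yaml
# Hypothetical sketch - pin DB/WAL to one specific NVMe via paths
service_type: osd
service_id: hdd_osd
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    paths:
      - /dev/nvme1n1   # the one NVMe drive I want used for DB/WAL
```

but please confirm whether `paths` is honoured under db_devices.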
2. To deploy 3 different types of OSDs using the spec I have with your
advice, I am guessing this is the correct spec:

service_type: osd
service_id: hdd_osd
crush_device_class: hdd_class
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
    size: 1000G:1600G
  filter_logic: AND
  objectstore: bluestore
---
service_type: osd
service_id: ssd_osd
crush_device_class: ssd_class
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 0
    size: 6000G:8000G
---
service_type: osd
service_id: nvme_osd
crush_device_class: nvme_class
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 0
    size: 4000G:5500G
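Before applying, I intend to sanity-check the spec with a dry run (assuming
the file is saved as osd_spec.yaml):

```shell
# Preview which devices each service would claim, without creating any OSDs
ceph orch apply -i osd_spec.yaml --dry-run
```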
Many thanks
Steven
On Sun, 29 Jun 2025 at 17:45, Anthony D'Atri <[email protected]> wrote:
> So you have NVMe SSD OSDs, SATA SSD OSDs, and HDD OSDs with offload onto
> NVMe SSDs.
>
> Did you have a specific reason to explicitly specify wal_devices? It's
> usually fine to just run with the default WAL size, with the WAL colocated
> with the DB, and thus give your DB partitions a bit more space.
>
> What are your use-cases for these three classes of OSDs? Looks like you
> have 42x 20T HDD OSDs, 63x NVMe OSDs, and 84x 7.6T SATA SSD OSDs?
> Apparently with the 15T SSDs divided into 3x OSDs each? How much CPU do
> you have on these nodes? Any specific reason to have chopped up the NVMe
> SSDs into thirds?
>
> It looks to me as though your .mgr pool is using the default
> replicated_rule, which does not specify a device class. This will confound
> the balancer and, if enabled, the pg_autoscaler.
> I recommend changing the .mgr pool to use the CRUSH rule that the
> non-buckets.data pools use, which should be one that specifies
> 3-replication constrained to one of the SSD device classes. As it is the
> .mgr pool may be placed on any of the three device classes, which is
> trivial with respect to space, but confounds as I mentioned.
>
> Or you could manually edit the CRUSH map and change the #0 replicated_rule
> to specify nvme_class but it sounds like you’re new to Ceph and I don’t
> want to frighten you with that process which unfortunately is still
> old-school. Changing the rule as I suggested will be much safer.
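> For instance, assuming the SSD-backed replicated rule is named
> `replicated_ssd` (substitute whatever `ceph osd crush rule ls` shows on
> your cluster), the change is a one-liner:

```shell
# List the CRUSH rules, then point the .mgr pool at the SSD-backed one
ceph osd crush rule ls
ceph osd pool set .mgr crush_rule replicated_ssd
```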
>
> The numbers look like you have all of the RGW pools except buckets.data on
> the nvme_class SSDs, which is fine, but you won’t begin to use all their
> capacity, the index pool will maybe use 5-10% of the capacity used by your
> buckets.data pool over time, depending on your distribution of object sizes
> and the replication strategy of your buckets.data pool. Doing the math
> I’ll speculate that your buckets.data pool is using a … EC 5+2 profile?
> True? If so I might suggest rebuilding if/while you still can. There are
> distinct advantages to having EC K+M < the number of OSD nodes.
>
>
>
>
> Hi,
>
> Yes, I have separate NVMe namespaces allocated for WAL and DB for each
> spinning disk
>
>
> Namespaces, or partitions?
>
>
> Does that mean I still have to hunt for the 8TB culprit ?
>
>
> Okay, so `ceph df` shows 8.2 TB of raw space used on the hdd_class OSDs,
> that's your concern, right?
>
> Please share outputs of the following:
>
> `ceph osd df` (showing a few of each device class)
> `ceph osd dump | grep pool`
> `ceph osd metadata NNNN | egrep /dev\|bluefs_\|bluestore_bdev` for at
> least one OSD of each device class. And run it yourself without specifying
> an OSD ID so it captures all, and see if all OSDs in each device class look
> the same.
> `ceph device ls-by-host ceph-host-1`
>
> It’s entirely possible that your WAL+DB aren’t actually offloaded to SSDs
> as you intended. Advanced OSD service specs can be tricky.
>
> That’s my suspicion, that the WAL+DB are actually still on your HDDs.
> Which can be migrated in-situ, or you can nuke the site from orbit and
> redeploy.
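> If you do migrate in-situ, the rough shape of it (on a stopped OSD, with
> placeholder IDs and LV names here - check the ceph-volume documentation for
> the full procedure) is something like:

```shell
# Attach a new DB device on the NVMe to a stopped OSD
# (osd-fsid, VG and LV names below are placeholders, not literal values)
ceph-volume lvm new-db --osd-id 188 --osd-fsid <osd-fsid> --target <nvme-vg>/<db-lv>
```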
>
> A note about your OSD specs. Specifying the models as you’re doing is
> totally supported. But think about what happens if you add nodes in the
> future that have different drive SKUs, or you RMA a drive and they send you
> a different SKU as the replacement.
>
> It’s usually more future-proof to use a size range in the spec for each
> osd service instead of `model`, with a bit of margin to account for base 2
> units vs base 10 units.
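> For instance, a nominal 1 TB (base 10) drive is about 7% smaller when
> expressed in base 2 units, which is why the margin matters:

```shell
# 10^12 bytes expressed in GiB: a "1 TB" drive is only ~931 GiB
echo "$((10**12 / 2**30)) GiB"    # → 931 GiB
```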
>
> Here’s an example that creates OSDs on SSDs between 490 and 1200 GB, this
> is on systems that have ~ 1TB nominal drives. The systems also have 2TB
> SATA SSDs that are used for WAL+DB offload, which are above the 1200GB
> limit specified so they aren’t matched.
>
> service_type: osd
> service_id: dashboard-admin-1705602677615
> service_name: osd.dashboard-admin-1705602677615
> placement:
>   host_pattern: '*'
> spec:
>   data_devices:
>     rotational: 0
>     size: 490G:1200G
>   filter_logic: AND
>   objectstore: bluestore
>
> And here is a spec that matches any HDD larger than 18T and deploys OSDs
> on them without offload. This cluster has 20TB HDDs, so the range of 18+
> TB matches both the SEAGATE_ST20000NM007H and SEAGATE_ST20000NM002D drives
> present.
>
> service_type: osd
> service_id: cost_capacity
> service_name: osd.cost_capacity
> placement:
>   host_pattern: noactuallyusedanymore
> spec:
>   data_devices:
>     rotational: 1
>     size: '18T:'
>   filter_logic: AND
>   objectstore: bluestore
>
> Oh, and make sure that your HDDs and SSDs are all updated to the most
> recent firmware. If you have Dell chassis, run DSU on the nodes and update
> all firmware, but skip the OS drivers. If you have HP chassis, you can get
> firmware update scripts from their web site, but I suspect these aren’t
> HP. If anyone else, they're likely generic drives and you can get firmware
> updaters from the manufacturers' respective web sites.
>
> Then reboot nodes one at a time to effect the firmware, letting the
> cluster completely recover between each reboot.
>
>
>
>
> If yes , what would be the most efficient way of finding out what takes
> the space ?
>
> Apologies for sending pictures but we are operating in an air gapped
> environment
>
> I used this spec file to create the OSDs
>
> <image.png>
>
> Here is the osd tree of one of the servers
> all the other 6 are similar
>
> <image.png>
>
> Steven
>
>
> On Sun, 29 Jun 2025 at 14:25, Anthony D'Atri <[email protected]> wrote:
>
>> WAL by default rides along with the DB and rarely warrants a separate or
>> larger allocation.
>>
>> Since you say you’ve allocated DB space, does that mean that you have
>> WAL+DB offloaded onto SSDs? If so they don’t contribute to the space used
>> on the hdd device class.
>>
>>
>> > On Jun 29, 2025, at 1:56 PM, Steven Vacaroaia <[email protected]> wrote:
>> >
>> > Hi Janne
>> >
>> > Thanks
>> > That makes sense since I have allocated 196 GB for DB and 5 GB for WAL
>> > for all 42 spinning OSDs
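>> > That works out to roughly the reported raw usage:

```shell
# 42 spinning OSDs x (196 GB DB + 5 GB WAL) each
echo "$((42 * (196 + 5))) GB"   # → 8442 GB, i.e. roughly 8 TB
```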
>> > Again, thanks
>> > Steven
>> >
>> > On Sun, 29 Jun 2025 at 12:02, Janne Johansson <[email protected]>
>> wrote:
>> >
>> >> Den sön 29 juni 2025 kl 17:22 skrev Steven Vacaroaia <[email protected]
>> >:
>> >>
>> >>> Hi,
>> >>>
>> >>> I just built a new Ceph Squid cluster with 7 nodes.
>> >>> Since this is brand new, there is no actual data on it except a few
>> >>> test files in the S3 data.bucket.
>> >>>
>> >>> Why is "ceph -s" reporting 8 TB of used capacity ?
>> >>>
>> >>
>> >> Because each OSD preallocates GBs of space for the RocksDB,
>> >> write-ahead logs and other structures, and this counts against "raw
>> >> available space" even if you don't have objects of this size in the
>> >> pools. The creation of the DBs and other structures happened at OSD
>> >> creation, or when the first object was written, and the space remains
>> >> allocated even if you delete the object later.
>> >>
>> >> --
>> >> May the most significant bit of your life be positive.
>> >>
>> > _______________________________________________
>> > ceph-users mailing list -- [email protected]
>> > To unsubscribe send an email to [email protected]
>>
>>
>
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META
AVAIL %USE VAR PGS STATUS
153 hdd_class 18.38179 1.00000 18 TiB 266 GiB 70 GiB 1 KiB 1.3
GiB 18 TiB 1.41 2.14 151 up
160 hdd_class 18.38179 1.00000 18 TiB 262 GiB 66 GiB 1 KiB 1.2
GiB 18 TiB 1.39 2.11 143 up
167 hdd_class 18.38179 1.00000 18 TiB 262 GiB 66 GiB 1 KiB 1.2
GiB 18 TiB 1.39 2.12 143 up
172 hdd_class 18.38179 1.00000 18 TiB 264 GiB 68 GiB 1 KiB 1.2
GiB 18 TiB 1.40 2.13 146 up
180 hdd_class 18.38179 1.00000 18 TiB 263 GiB 67 GiB 1 KiB 1.2
GiB 18 TiB 1.40 2.12 145 up
186 hdd_class 18.38179 1.00000 18 TiB 265 GiB 69 GiB 1 KiB 1.2
GiB 18 TiB 1.41 2.14 150 up
6 nvme_class 4.54749 1.00000 4.5 TiB 192 MiB 69 MiB 1 KiB 123
MiB 4.5 TiB 0.00 0.01 5 up
13 nvme_class 4.54749 1.00000 4.5 TiB 113 MiB 69 MiB 1 KiB 44
MiB 4.5 TiB 0.00 0.00 10 up
20 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 5 up
26 nvme_class 4.54749 1.00000 4.5 TiB 192 MiB 69 MiB 1 KiB 123
MiB 4.5 TiB 0.00 0.01 7 up
32 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 8 up
40 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 5 up
48 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 6 up
55 nvme_class 4.54749 1.00000 4.5 TiB 125 MiB 69 MiB 1 KiB 56
MiB 4.5 TiB 0.00 0.00 5 up
62 nvme_class 4.54749 1.00000 4.5 TiB 941 MiB 69 MiB 4 KiB 872
MiB 4.5 TiB 0.02 0.03 8 up
67 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
74 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
81 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
86 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
93 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
100 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
107 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
113 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
120 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
127 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
134 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
142 ssd_class 6.98630 1.00000 7.0 TiB 95 MiB 69 MiB 1 KiB 26
MiB 7.0 TiB 0.00 0.00 0 up
147 hdd_class 18.38179 1.00000 18 TiB 263 GiB 67 GiB 1 KiB 1.2
GiB 18 TiB 1.40 2.12 145 up
158 hdd_class 18.38179 1.00000 18 TiB 263 GiB 67 GiB 1 KiB 1.2
GiB 18 TiB 1.39 2.12 145 up
166 hdd_class 18.38179 1.00000 18 TiB 265 GiB 69 GiB 1 KiB 1.2
GiB 18 TiB 1.41 2.14 150 up
171 hdd_class 18.38179 1.00000 18 TiB 263 GiB 67 GiB 1 KiB 1.2
GiB 18 TiB 1.40 2.12 144 up
179 hdd_class 18.38179 1.00000 18 TiB 265 GiB 69 GiB 1 KiB 1.2
GiB 18 TiB 1.41 2.14 150 up
187 hdd_class 18.38179 1.00000 18 TiB 264 GiB 68 GiB 1 KiB 1.2
GiB 18 TiB 1.40 2.13 147 up
5 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 7 up
10 nvme_class 4.54749 1.00000 4.5 TiB 258 MiB 69 MiB 1 KiB 189
MiB 4.5 TiB 0.01 0.01 5 up
15 nvme_class 4.54749 1.00000 4.5 TiB 196 MiB 69 MiB 1 KiB 127
MiB 4.5 TiB 0.00 0.01 9 up
22 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 5 up
28 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 11 up
35 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 7 up
42 nvme_class 4.54749 1.00000 4.5 TiB 188 MiB 69 MiB 1 KiB 118
MiB 4.5 TiB 0.00 0.01 4 up
50 nvme_class 4.54749 1.00000 4.5 TiB 941 MiB 69 MiB 4 KiB 872
MiB 4.5 TiB 0.02 0.03 8 up
57 nvme_class 4.54749 1.00000 4.5 TiB 188 MiB 69 MiB 1 KiB 118
MiB 4.5 TiB 0.00 0.01 2 up
66 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
71 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
80 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
85 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
92 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
98 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
105 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
112 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
119 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
126 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
133 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
140 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
149 hdd_class 18.38179 1.00000 18 TiB 262 GiB 66 GiB 1 KiB 1.2
GiB 18 TiB 1.39 2.12 144 up
156 hdd_class 18.38179 1.00000 18 TiB 262 GiB 66 GiB 1 KiB 1.2
GiB 18 TiB 1.39 2.12 144 up
162 hdd_class 18.38179 1.00000 18 TiB 262 GiB 66 GiB 1 KiB 1.2
GiB 18 TiB 1.39 2.11 143 up
168 hdd_class 18.38179 1.00000 18 TiB 265 GiB 69 GiB 1 KiB 1.2
GiB 18 TiB 1.41 2.14 150 up
175 hdd_class 18.38179 1.00000 18 TiB 266 GiB 70 GiB 1 KiB 1.3
GiB 18 TiB 1.41 2.14 150 up
182 hdd_class 18.38179 1.00000 18 TiB 262 GiB 66 GiB 1 KiB 1.2
GiB 18 TiB 1.39 2.12 143 up
0 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 4 up
11 nvme_class 4.54749 1.00000 4.5 TiB 125 MiB 69 MiB 1 KiB 56
MiB 4.5 TiB 0.00 0.00 6 up
16 nvme_class 4.54749 1.00000 4.5 TiB 108 MiB 69 MiB 1 KiB 39
MiB 4.5 TiB 0.00 0.00 8 up
27 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 4 up
33 nvme_class 4.54749 1.00000 4.5 TiB 395 MiB 69 MiB 1 KiB 326
MiB 4.5 TiB 0.01 0.01 8 up
38 nvme_class 4.54749 1.00000 4.5 TiB 654 MiB 69 MiB 1 KiB 585
MiB 4.5 TiB 0.01 0.02 6 up
46 nvme_class 4.54749 1.00000 4.5 TiB 297 MiB 69 MiB 1 KiB 228
MiB 4.5 TiB 0.01 0.01 9 up
52 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 7 up
58 nvme_class 4.54749 1.00000 4.5 TiB 297 MiB 69 MiB 1 KiB 228
MiB 4.5 TiB 0.01 0.01 5 up
68 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
76 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
83 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
88 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
95 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
102 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
109 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
116 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
123 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
128 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
135 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
141 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
148 hdd_class 18.38179 1.00000 18 TiB 262 GiB 66 GiB 1 KiB 1.5
GiB 18 TiB 1.39 2.11 143 up
155 hdd_class 18.38179 1.00000 18 TiB 263 GiB 67 GiB 1 KiB 1.3
GiB 18 TiB 1.40 2.12 146 up
164 hdd_class 18.38179 1.00000 18 TiB 262 GiB 66 GiB 1 KiB 1.2
GiB 18 TiB 1.39 2.12 143 up
170 hdd_class 18.38179 1.00000 18 TiB 263 GiB 67 GiB 1 KiB 1.2
GiB 18 TiB 1.40 2.12 145 up
177 hdd_class 18.38179 1.00000 18 TiB 264 GiB 68 GiB 1 KiB 1.2
GiB 18 TiB 1.40 2.13 146 up
183 hdd_class 18.38179 1.00000 18 TiB 263 GiB 67 GiB 1 KiB 1.2
GiB 18 TiB 1.40 2.12 144 up
1 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 6 up
7 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 2 up
17 nvme_class 4.54749 1.00000 4.5 TiB 720 MiB 69 MiB 1 KiB 651
MiB 4.5 TiB 0.02 0.02 11 up
23 nvme_class 4.54749 1.00000 4.5 TiB 487 MiB 69 MiB 1 KiB 418
MiB 4.5 TiB 0.01 0.02 7 up
34 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 7 up
41 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 7 up
47 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 3 up
53 nvme_class 4.54749 1.00000 4.5 TiB 188 MiB 69 MiB 1 KiB 118
MiB 4.5 TiB 0.00 0.01 7 up
59 nvme_class 4.54749 1.00000 4.5 TiB 258 MiB 69 MiB 1 KiB 189
MiB 4.5 TiB 0.01 0.01 5 up
63 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
72 ssd_class 6.98630 1.00000 7.0 TiB 156 MiB 69 MiB 1 KiB 87
MiB 7.0 TiB 0.00 0.00 0 up
78 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
89 ssd_class 6.98630 1.00000 7.0 TiB 112 MiB 69 MiB 1 KiB 43
MiB 7.0 TiB 0.00 0.00 0 up
97 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
104 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
111 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
118 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
125 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
132 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
139 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
146 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
151 hdd_class 18.38179 1.00000 18 TiB 262 GiB 66 GiB 1 KiB 1.2
GiB 18 TiB 1.39 2.11 142 up
159 hdd_class 18.38179 1.00000 18 TiB 262 GiB 66 GiB 1 KiB 1.2
GiB 18 TiB 1.39 2.11 142 up
163 hdd_class 18.38179 1.00000 18 TiB 265 GiB 69 GiB 1 KiB 1.2
GiB 18 TiB 1.41 2.14 150 up
173 hdd_class 18.38179 1.00000 18 TiB 265 GiB 69 GiB 1 KiB 1.3
GiB 18 TiB 1.41 2.14 150 up
178 hdd_class 18.38179 1.00000 18 TiB 265 GiB 69 GiB 1 KiB 1.3
GiB 18 TiB 1.41 2.14 150 up
185 hdd_class 18.38179 1.00000 18 TiB 264 GiB 68 GiB 1 KiB 1.3
GiB 18 TiB 1.40 2.13 147 up
4 nvme_class 4.54749 1.00000 4.5 TiB 611 MiB 69 MiB 1 KiB 542
MiB 4.5 TiB 0.01 0.02 9 up
12 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 5 up
18 nvme_class 4.54749 1.00000 4.5 TiB 192 MiB 69 MiB 1 KiB 123
MiB 4.5 TiB 0.00 0.01 9 up
24 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 3 up
29 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 5 up
39 nvme_class 4.54749 1.00000 4.5 TiB 395 MiB 69 MiB 1 KiB 326
MiB 4.5 TiB 0.01 0.01 5 up
45 nvme_class 4.54749 1.00000 4.5 TiB 262 MiB 69 MiB 1 KiB 193
MiB 4.5 TiB 0.01 0.01 6 up
54 nvme_class 4.54749 1.00000 4.5 TiB 95 MiB 69 MiB 1 KiB 26
MiB 4.5 TiB 0.00 0.00 8 up
61 nvme_class 4.54749 1.00000 4.5 TiB 399 MiB 69 MiB 1 KiB 330
MiB 4.5 TiB 0.01 0.01 9 up
64 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
73 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
79 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
87 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
94 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
101 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
108 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
114 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
121 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
129 ssd_class 6.98630 1.00000 7.0 TiB 268 MiB 110 MiB 1 KiB 158
MiB 7.0 TiB 0.00 0.01 1 up
136 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
143 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
150 hdd_class 18.38179 1.00000 18 TiB 264 GiB 68 GiB 1 KiB 1.2
GiB 18 TiB 1.40 2.13 146 up
154 hdd_class 18.38179 1.00000 18 TiB 266 GiB 70 GiB 1 KiB 1.3
GiB 18 TiB 1.41 2.14 151 up
161 hdd_class 18.38179 1.00000 18 TiB 263 GiB 67 GiB 1 KiB 1.2
GiB 18 TiB 1.40 2.12 145 up
169 hdd_class 18.38179 1.00000 18 TiB 263 GiB 67 GiB 1 KiB 1.1
GiB 18 TiB 1.40 2.13 146 up
176 hdd_class 18.38179 1.00000 18 TiB 266 GiB 70 GiB 1 KiB 1.2
GiB 18 TiB 1.41 2.14 151 up
184 hdd_class 18.38179 1.00000 18 TiB 265 GiB 69 GiB 1 KiB 1.3
GiB 18 TiB 1.41 2.14 150 up
2 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 6 up
8 nvme_class 4.54749 1.00000 4.5 TiB 188 MiB 69 MiB 1 KiB 118
MiB 4.5 TiB 0.00 0.01 9 up
19 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 4 up
25 nvme_class 4.54749 1.00000 4.5 TiB 192 MiB 69 MiB 1 KiB 123
MiB 4.5 TiB 0.00 0.01 5 up
30 nvme_class 4.54749 1.00000 4.5 TiB 192 MiB 69 MiB 1 KiB 123
MiB 4.5 TiB 0.00 0.01 8 up
36 nvme_class 4.54749 1.00000 4.5 TiB 293 MiB 69 MiB 1 KiB 224
MiB 4.5 TiB 0.01 0.01 7 up
44 nvme_class 4.54749 1.00000 4.5 TiB 941 MiB 69 MiB 4 KiB 872
MiB 4.5 TiB 0.02 0.03 6 up
49 nvme_class 4.54749 1.00000 4.5 TiB 188 MiB 69 MiB 1 KiB 118
MiB 4.5 TiB 0.00 0.01 6 up
56 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 2 up
65 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
70 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
77 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
84 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
91 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
99 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
106 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
115 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
122 ssd_class 6.98630 1.00000 7.0 TiB 116 MiB 69 MiB 1 KiB 47
MiB 7.0 TiB 0.00 0.00 0 up
130 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
137 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
144 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
152 hdd_class 18.38179 1.00000 18 TiB 262 GiB 66 GiB 1 KiB 1.2
GiB 18 TiB 1.39 2.12 144 up
157 hdd_class 18.38179 1.00000 18 TiB 266 GiB 70 GiB 1 KiB 1.3
GiB 18 TiB 1.41 2.14 151 up
165 hdd_class 18.38179 1.00000 18 TiB 266 GiB 70 GiB 1 KiB 1.3
GiB 18 TiB 1.41 2.14 151 up
174 hdd_class 18.38179 1.00000 18 TiB 262 GiB 66 GiB 1 KiB 1.2
GiB 18 TiB 1.39 2.11 143 up
181 hdd_class 18.38179 1.00000 18 TiB 262 GiB 66 GiB 1 KiB 1.2
GiB 18 TiB 1.39 2.11 143 up
188 hdd_class 18.38179 1.00000 18 TiB 263 GiB 67 GiB 1 KiB 1.2
GiB 18 TiB 1.40 2.12 143 up
3 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 10 up
9 nvme_class 4.54749 1.00000 4.5 TiB 188 MiB 69 MiB 1 KiB 118
MiB 4.5 TiB 0.00 0.01 6 up
14 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 2 up
21 nvme_class 4.54749 1.00000 4.5 TiB 108 MiB 69 MiB 1 KiB 39
MiB 4.5 TiB 0.00 0.00 6 up
31 nvme_class 4.54749 1.00000 4.5 TiB 209 MiB 69 MiB 1 KiB 140
MiB 4.5 TiB 0.00 0.01 5 up
37 nvme_class 4.54749 1.00000 4.5 TiB 188 MiB 69 MiB 1 KiB 118
MiB 4.5 TiB 0.00 0.01 10 up
43 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 5 up
51 nvme_class 4.54749 1.00000 4.5 TiB 104 MiB 69 MiB 1 KiB 34
MiB 4.5 TiB 0.00 0.00 3 up
60 nvme_class 4.54749 1.00000 4.5 TiB 337 MiB 69 MiB 1 KiB 268
MiB 4.5 TiB 0.01 0.01 11 up
69 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
75 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
82 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
90 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
96 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
103 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
110 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
117 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
124 ssd_class 6.98630 1.00000 7.0 TiB 268 MiB 110 MiB 1 KiB 158
MiB 7.0 TiB 0.00 0.01 1 up
131 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
138 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
145 ssd_class 6.98630 1.00000 7.0 TiB 108 MiB 69 MiB 1 KiB 39
MiB 7.0 TiB 0.00 0.00 0 up
              TOTAL  1.6 PiB  11 TiB  2.8 TiB  303 KiB  65 GiB  1.6 PiB  0.66
MIN/MAX VAR: 0.00/2.14  STDDEV: 0.68
*************************
ceph osd dump | grep pool >> debug.txt
************************
pool 1 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins
pg_num 1 pgp_num 1 autoscale_mode on last_change 72 flags hashpspool
stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr read_balance_score
150.00
pool 26 '.rgw.root' replicated size 3 min_size 2 crush_rule 6 object_hash
rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 757 flags hashpspool
stripe_width 0 application rgw read_balance_score 60.00
pool 27 'default.rgw.log' replicated size 3 min_size 2 crush_rule 6 object_hash
rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 749 flags hashpspool
stripe_width 0 application rgw read_balance_score 60.00
pool 28 'default.rgw.control' replicated size 3 min_size 2 crush_rule 6
object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 768 flags
hashpspool stripe_width 0 application rgw read_balance_score 60.00
pool 29 'default.rgw.meta' replicated size 3 min_size 2 crush_rule 6
object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 755 flags
hashpspool stripe_width 0 pg_autoscale_bias 4 application rgw
read_balance_score 60.00
pool 30 'default.rgw.buckets.index' replicated size 3 min_size 2 crush_rule 6
object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode on last_change 1153
lfor 0/0/1146 flags hashpspool stripe_width 0 pg_autoscale_bias 4 application
rgw read_balance_score 2.95
pool 31 'default.rgw.buckets.non-ec' replicated size 3 min_size 2 crush_rule 6
object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 759 flags
hashpspool stripe_width 0 application rgw read_balance_score 60.00
pool 33 'default.rgw.buckets.data' erasure profile hdd-k4m2 size 6 min_size 5
crush_rule 9 object_hash rjenkins pg_num 1024 pgp_num 1024 autoscale_mode warn
last_change 1136 flags hashpspool,ec_overwrites max_bytes 329853488332800
stripe_width 16384 application rgw
*******************
osd.188 is a rotational drive
ceph osd metadata osd.188 >> debug.txt
******************
{
"id": 188,
"arch": "x86_64",
"back_addr": "[v2:10.0.0.7:7010/1214744388,v1:10.0.0.7:7011/1214744388]",
"back_iface": "",
"bluefs": "1",
"bluefs_db_access_mode": "blk",
"bluefs_db_block_size": "4096",
"bluefs_db_dev_node": "/dev/dm-38",
"bluefs_db_devices": "nvme0n1",
"bluefs_db_driver": "KernelDevice",
"bluefs_db_optimal_io_size": "4096",
"bluefs_db_partition_path": "/dev/dm-38",
"bluefs_db_rotational": "0",
"bluefs_db_size": "210449203200",
"bluefs_db_support_discard": "1",
"bluefs_db_type": "ssd",
"bluefs_dedicated_db": "1",
"bluefs_dedicated_wal": "1",
"bluefs_single_shared_device": "0",
"bluefs_wal_access_mode": "blk",
"bluefs_wal_block_size": "4096",
"bluefs_wal_dev_node": "/dev/dm-37",
"bluefs_wal_devices": "nvme0n10",
"bluefs_wal_driver": "KernelDevice",
"bluefs_wal_optimal_io_size": "4096",
"bluefs_wal_partition_path": "/dev/dm-37",
"bluefs_wal_rotational": "0",
"bluefs_wal_size": "5364514816",
"bluefs_wal_support_discard": "1",
"bluefs_wal_type": "ssd",
"bluestore_allocation_from_file": "1",
"bluestore_bdev_access_mode": "blk",
"bluestore_bdev_block_size": "4096",
"bluestore_bdev_dev_node": "/dev/dm-36",
"bluestore_bdev_devices": "sdf",
"bluestore_bdev_driver": "KernelDevice",
"bluestore_bdev_optimal_io_size": "0",
"bluestore_bdev_partition_path": "/dev/dm-36",
"bluestore_bdev_rotational": "1",
"bluestore_bdev_size": "20000584761344",
"bluestore_bdev_support_discard": "0",
"bluestore_bdev_type": "hdd",
"bluestore_min_alloc_size": "4096",
"ceph_release": "squid",
"ceph_version": "ceph version 19.2.2
(0eceb0defba60152a8182f7bd87d164b639885b8) squid (stable)",
"ceph_version_short": "19.2.2",
"ceph_version_when_created": "ceph version 19.2.2
(0eceb0defba60152a8182f7bd87d164b639885b8) squid (stable)",
"container_hostname": "ceph-host-7",
"container_image":
"quay.io/ceph/ceph@sha256:8214ebff6133ac27d20659038df6962dbf9d77da21c9438a296b2e2059a56af6",
"cpu": "INTEL(R) XEON(R) GOLD 6530",
"created_at": "2025-06-25T21:13:21.665651Z",
"default_device_class": "hdd",
"device_ids":
"nvme0n1=MTFDKCC1T6TGQ-1BK1DABYY_3624105411B7,nvme0n10=MTFDKCC1T6TGQ-1BK1DABYY_3624105411B7,sdf=ATA_ST20000NM007D-3D_ZVTH5VM0",
"device_paths":
"nvme0n1=/dev/disk/by-path/pci-0000:49:00.0-nvme-1,nvme0n10=/dev/disk/by-path/pci-0000:49:00.0-nvme-10,sdf=/dev/disk/by-path/pci-0000:38:00.0-sas-exp0x500304802bcd323f-phy10-lun-0",
"devices": "nvme0n1,nvme0n10,sdf",
"distro": "centos",
"distro_description": "CentOS Stream 9",
"distro_version": "9",
"front_addr":
"[v2:192.168.122.236:7008/1214744388,v1:192.168.122.236:7009/1214744388]",
"front_iface": "",
"hb_back_addr": "[v2:10.0.0.7:7014/1214744388,v1:10.0.0.7:7015/1214744388]",
"hb_front_addr":
"[v2:192.168.122.236:7012/1214744388,v1:192.168.122.236:7013/1214744388]",
"hostname": "ceph-host-7",
"journal_rotational": "0",
"kernel_description": "#65-Ubuntu SMP PREEMPT_DYNAMIC Mon May 19 17:15:03
UTC 2025",
"kernel_version": "6.8.0-62-generic",
"mem_swap_kb": "8388604",
"mem_total_kb": "1056481544",
"network_numa_unknown_ifaces": "back_iface,front_iface",
"objectstore_numa_nodes": "1",
"objectstore_numa_unknown_devices": "sdf",
"os": "Linux",
"osd_data": "/var/lib/ceph/osd/ceph-188",
"osd_objectstore": "bluestore",
"osdspec_affinity": "hdd_osds",
"rotational": "1"
}
**************
ceph device ls-by-host ceph-host-1 >> debug.txt
**************
DEVICE DEV
DAEMONS EXPECTED FAILURE
ATA_Micron_5400_MTFD_24534D3377EF sdh
osd.74
ATA_Micron_5400_MTFD_24534D33781A sdp
osd.127
ATA_Micron_5400_MTFD_24534D33783D sdj
osd.86
ATA_Micron_5400_MTFD_24534D33783F sdr
osd.142
ATA_Micron_5400_MTFD_24534D337850 sdi
osd.81
ATA_Micron_5400_MTFD_24534D33A815 sdk
osd.93
ATA_Micron_5400_MTFD_24534D33B3B8 sdq
osd.134
ATA_Micron_5400_MTFD_24534D33B3C7 sdm
osd.107
ATA_Micron_5400_MTFD_24534D33B3F5 sdg
osd.67
ATA_Micron_5400_MTFD_24534D33B413 sdo
osd.120
ATA_Micron_5400_MTFD_24534D33B44D sdl
osd.100
ATA_Micron_5400_MTFD_24534D33BB42 sdn
osd.113
ATA_ST20000NM007D-3D_ZVTGW8GC sdd
osd.172
ATA_ST20000NM007D-3D_ZVTH1FLZ sde
osd.180
ATA_ST20000NM007D-3D_ZVTH2TKN sda
osd.153
ATA_ST20000NM007D-3D_ZVTH37LY sdc
osd.167
ATA_ST20000NM007D-3D_ZVTH4KWL sdf
osd.186
ATA_ST20000NM007D-3D_ZVTH4YH6 sdb
osd.160
MTFDKCC15T3TGP-1BK1DABYY_2724104F6785 nvme0n1 nvme0n2 nvme0n3
osd.13 osd.20 osd.6
MTFDKCC15T3TGP-1BK1DABYY_3324104F16E2 nvme3n1 nvme3n2 nvme3n3
osd.26 osd.32 osd.40
MTFDKCC15T3TGP-1BK1DABYY_38241053E2E3 nvme4n1 nvme4n2 nvme4n3
osd.48 osd.55 osd.62
MTFDKCC1T6TGQ-1BK1DABYY_50241081D831 nvme1n1 nvme1n11 nvme1n12 nvme1n13
nvme1n14 nvme1n6 osd.153 osd.160 osd.167 osd.172 osd.180 osd.186
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]