Hi,
I'm trying to upgrade my cluster (19.2.2 -> 19.2.3). The mon and mgr upgrades go fine, but I hit an issue with the OSDs:
Upgrade: unsafe to stop osd(s) at this time (165 PGs are or would become offline)
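For reference, I believe the check behind that message can be reproduced by hand with ceph osd ok-to-stop (osd ids below are taken from the tree further down; I'm assuming the manual command and the orchestrator's internal check behave the same):

root@ceph-monitor-1:/# ceph osd ok-to-stop 2
root@ceph-monitor-1:/# ceph osd ok-to-stop 2 1 5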
The cluster is in HEALTH_OK, all pools are replica 3, and all PGs are active+clean. The autoscaler is off, as the Ceph upgrade docs recommend.
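(Autoscaling can be turned off globally with the noautoscale flag or per pool; I used something along these lines:)

root@ceph-monitor-1:/# ceph osd pool set noautoscale
root@ceph-monitor-1:/# ceph osd pool set <pool> pg_autoscale_mode off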
Does forcing the upgrade past the ceph osd ok-to-stop check risk data loss?
The only CRUSH rule used in the cluster is replicated_rule:
root@ceph-monitor-1:/# ceph osd crush rule dump replicated_rule
{
    "rule_id": 0,
    "rule_name": "replicated_rule",
    "type": 1,
    "steps": [
        {
            "op": "take",
            "item": -1,
            "item_name": "inist"
        },
        {
            "op": "chooseleaf_firstn",
            "num": 0,
            "type": "host"
        },
        {
            "op": "emit"
        }
    ]
}
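In case it's useful for diagnosing the 165-PG count, the PGs currently mapped to a given OSD can be listed, e.g.:

root@ceph-monitor-1:/# ceph pg ls-by-osd osd.2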
root@ceph-monitor-1:/# ceph osd tree
ID   CLASS  WEIGHT   TYPE NAME                               STATUS  REWEIGHT  PRI-AFF
 -1         6.63879  root inist
-15         0.90970      host ceph-monitor-1
  6    hdd  0.90970          osd.6                               up   1.00000  1.00000
-21         5.72910      datacenter bat1
-20         2.72910          room room01
-19         2.72910              row left
-18         2.72910                  rack 10
 -3         0.90970                      host ceph-node-1
  2    hdd  0.90970                          osd.2               up   1.00000  1.00000
 -5         0.90970                      host ceph-node-2
  1    hdd  0.90970                          osd.1               up   1.00000  1.00000
 -9         0.90970                      host ceph-node-3
  5    hdd  0.90970                          osd.5               up   1.00000  1.00000
-36         3.00000          room room03
-35         3.00000              row left06
-34         3.00000                  rack 08
 -7         1.00000                      host ceph-node-4
  0    hdd  1.00000                          osd.0               up   1.00000  1.00000
-13         1.00000                      host ceph-node-5
  3    hdd  1.00000                          osd.3               up   1.00000  1.00000
-11         1.00000                      host ceph-node-6
  4    hdd  1.00000                          osd.4               up   1.00000  1.00000
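As far as I understand, ok-to-stop depends on each pool's size/min_size, so for completeness the per-pool settings can be checked with:

root@ceph-monitor-1:/# ceph osd pool ls detail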
Thanks!
Vivien