On 1/19/22 02:49, Norbert Veber wrote:
Thanks for checking! I was using the official Ceph packages from
stable, and have a cluster with a Ceph filesystem (data and metadata
pools), three mon and mgr nodes. Manual install, nothing too crazy.
I compiled the sources and installed them manually with 'dpkg -i'.
Everything restarted fine except the ceph-mgr.
# ceph status
  cluster:
    id:     c792f202-9f83-4f7c-ae08-243ef1afe1d8
    health: HEALTH_WARN
            no active mgr

  services:
    mon: 3 daemons, quorum pyre,llama,epsilon (age 34h)
    mgr: no daemons active (since 10h)
    mds: cephfs-ssd:1 {0=llama=up:active} 2 up:standby
    osd: 4 osds: 4 up (since 34h), 4 in (since 3d)

  data:
    pools:   2 pools, 64 pgs
    objects: 50.27k objects, 54 GiB
    usage:   118 GiB used, 8.1 TiB / 8.2 TiB avail
    pgs:     64 active+clean
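If it helps narrow it down, the usual first step on a manual install like
this is to look at why the mgr unit won't stay up. A rough sketch, assuming
the daemon id matches the hostname (e.g. pyre):

# systemctl status ceph-mgr@pyre
# journalctl -u ceph-mgr@pyre -b --no-pager | tail -n 50
# ceph-mgr -d -i pyre --setuser ceph --setgroup ceph

The last one runs the mgr in the foreground with logging to stderr, which
should show where it fails during startup.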
Only changed a few config options:
pyre:~# ceph config dump
WHO     MASK  LEVEL     OPTION                                  VALUE  RO
global        advanced  osd_pool_default_size                   2
  mon         advanced  auth_allow_insecure_global_id_reclaim   false
  mgr         advanced  mgr/dashboard/server_port               8083   *
  mgr         advanced  mgr/dashboard/ssl                       false  *
  osd         advanced  bdev_async_discard                      true
  osd         advanced  bdev_enable_discard                     true
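(For reference, options like these can be checked or changed through the
usual 'ceph config' commands, e.g.:

# ceph config get mgr mgr/dashboard/ssl
# ceph config set mgr mgr/dashboard/server_port 8083
)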
Sounds like if all else fails I could blow away this cluster and make a
new one from scratch with the new version. I mostly wanted to upgrade so
I could have multiple filesystems (one on SSD and one on HDD).
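For what it's worth, once on a release where multiple filesystems are
supported, the second (HDD-backed) filesystem would look roughly like the
following; the rule and pool names and pg counts below are just placeholders,
and the enable_multiple flag may already be set depending on the release:

# ceph osd crush rule create-replicated replicated_hdd default host hdd
# ceph osd pool create cephfs-hdd-meta 16 16 replicated replicated_hdd
# ceph osd pool create cephfs-hdd-data 64 64 replicated replicated_hdd
# ceph fs flag set enable_multiple true
# ceph fs new cephfs-hdd cephfs-hdd-meta cephfs-hdd-data

Each filesystem needs its own active MDS, so one of the existing standbys
would get picked up for the new one.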
I kind of want to make sure there's an upgrade path, so this is
important to address. I'll try it myself soonish, maybe after Ceph Pacific
is accepted into bullseye-backports.
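Once it lands there, pulling it in should just be the usual backports
routine, something like (package list may vary):

# echo 'deb http://deb.debian.org/debian bullseye-backports main' >> /etc/apt/sources.list.d/backports.list
# apt update
# apt install -t bullseye-backports ceph ceph-mds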
Cheers,
Thomas Goirand (zigo)