For what it's worth, I've now had the exact same problem, which led me
here.
On a bare-metal 20.04 install using whole blank HDDs as OSDs (/dev/sda
etc.), installing with cephadm worked fine on an XFS root, but when I
later reinstalled with a ZFS root I got the same behaviour described
above.
Follow-up: it does seem to be the tmpfs mount created by the activate
step that causes the problem.
I manually started the activate container by running the podman command
from unit.run for the activate step, but ran "bash -l" instead of the
actual activate command.
Then I prevented the tmpfs mount.
I think the reason that ZFS behaves differently is because of this, in
/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/activate.py:

    from ceph_volume.util import system

    # mount on tmpfs the osd directory
    osd_path = '/var/lib/ceph/osd/%s-%s' % (conf.cluster, osd_id)
    if not system.path_is_mounted(osd_path):
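To illustrate the kind of check that conditional performs (a minimal
sketch only; `path_is_mounted_sketch` and the sample mount-table text
are my own illustrative names and data, and whether ZFS datasets are
what actually confuses the real check is my speculation, not something
confirmed from the ceph-volume source):

```python
def path_is_mounted_sketch(path, mounts_text):
    """Return True if `path` appears as a mount point in a
    /proc/mounts-style table (second whitespace-separated field)."""
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[1] == path:
            return True
    return False

# Illustrative mount table, not captured from a real host. On a ZFS
# root, paths under /var live on zfs datasets rather than xfs/ext4.
mounts = """\
rpool/ROOT/ubuntu / zfs rw,relatime 0 0
tmpfs /var/lib/ceph/osd/ceph-0 tmpfs rw 0 0
"""

print(path_is_mounted_sketch('/var/lib/ceph/osd/ceph-0', mounts))  # True
print(path_is_mounted_sketch('/var/lib/ceph/osd/ceph-1', mounts))  # False
```

If a check like this returns True for the OSD path (or if the tmpfs
mount it gates behaves differently under a ZFS root), the activate step
would take a different path than it does on XFS.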