[Kernel-packages] [Bug 1881747] Re: cephadm does not work with zfs root

2020-10-05 Thread Martin Strange
For what it's worth, I've now hit the exact same problem, which led me here. On bare-metal 20.04 using whole blank HDDs as OSDs (/dev/sda etc.), installing with cephadm worked fine with an XFS root, but when I later reinstalled with a ZFS root, I got the same behaviour described above.

[Kernel-packages] [Bug 1881747] Re: cephadm does not work with zfs root

2020-10-06 Thread Martin Strange
Follow-up: it does seem to be the tmpfs mount that activate creates that causes the problem. I manually started the activate container by running the podman command from unit.run for the activate step, but ran "bash -l" instead of the actual activate command. Then I prevented the tmpfs mount …

[Kernel-packages] [Bug 1881747] Re: cephadm does not work with zfs root

2020-10-06 Thread Martin Strange
I think the reason that ZFS behaves differently is because of this, in /usr/lib/python3.6/site-packages/ceph_volume/devices/raw/activate.py:

    from ceph_volume.util import system
    # mount on tmpfs the osd directory
    osd_path = '/var/lib/ceph/osd/%s-%s' % (conf.cluster, osd_id)
    if not syste…
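The quoted snippet cuts off at what is presumably a mount check before activate creates the tmpfs mount. As an illustration only (this is a hedged sketch, not ceph_volume's actual implementation), a path_is_mounted-style check can be written as a pure function over text in the /proc/mounts format, which makes the "already mounted vs. not" branching easy to see and test:

```python
# Hedged sketch of a mount check in the spirit of ceph_volume's
# system.path_is_mounted. Written as a pure function over the text of
# /proc/mounts so it can be exercised without touching the real system.
# The names MOUNTS and path_is_mounted here are illustrative, not
# ceph_volume's actual code.

def path_is_mounted(path, mounts_text):
    """Return True if `path` appears as a mount point in `mounts_text`.

    Each /proc/mounts line has the form:
    device mountpoint fstype options dump pass
    """
    target = path.rstrip("/") or "/"
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[1] == target:
            return True
    return False


# Hypothetical mount table: a ZFS root plus one OSD dir already on tmpfs.
MOUNTS = """\
rpool/ROOT/ubuntu / zfs rw,relatime 0 0
tmpfs /var/lib/ceph/osd/ceph-0 tmpfs rw,relatime 0 0
"""

# activate only creates the tmpfs mount when the OSD directory is NOT
# already mounted, so which branch runs depends entirely on this check.
print(path_is_mounted("/var/lib/ceph/osd/ceph-0", MOUNTS))  # True
print(path_is_mounted("/var/lib/ceph/osd/ceph-1", MOUNTS))  # False
```

The point of the sketch is that the filesystem backing the OSD path (ZFS vs. XFS) never appears in the check itself; only what the mount table reports for that exact path matters, which is where a ZFS root can change the outcome.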