Related pull request on ceph side:
https://github.com/ceph/ceph/pull/46043
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1881747
Title:
cephadm does not work with zfs root
** Also affects: zfs-linux (Arch Linux)
Importance: Undecided
Status: New
I think the reason that ZFS behaves differently is because of this...

/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/activate.py

    from ceph_volume.util import system
    ...
    # mount on tmpfs the osd directory
    osd_path = '/var/lib/ceph/osd/%s-%s' % (conf.cluster, osd_id)
    if not system.path_is_mounted(osd_path):
        # mkdir -p and mount as tmpfs
        prepare_utils.create_osd_path(osd_id, tmpfs=not args.no_tmpfs)
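For context, here is a minimal standalone sketch of the kind of mount check that a helper like system.path_is_mounted performs. This is my own simplification based on /proc/mounts, not the actual ceph-volume implementation (which also resolves and compares device paths):

```python
import os

def path_is_mounted(path):
    """Return True if `path` appears as a mount point in /proc/mounts.

    Simplified sketch only; the real ceph-volume helper does more
    (e.g. it can also match against a specific backing device).
    """
    path = os.path.realpath(path)
    with open('/proc/mounts') as f:
        # Second whitespace-separated field of each line is the mount point.
        mountpoints = [line.split()[1] for line in f if len(line.split()) >= 2]
    return path in mountpoints

# '/' is always a mount point on Linux; an arbitrary unmounted
# directory is not, so activate would proceed to create the tmpfs.
```

If this check returns False for the OSD directory, activate goes on to mount a fresh tmpfs over it, which is the step that behaves differently on a ZFS root according to the comments above.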
Follow up - it does seem to be the tmpfs mount that activate creates
that causes the problem.
I manually started the activate container by running the podman command
from unit.run for the activate step, but just ran "bash -l" instead of
the actual activate command.
Then I prevented the tmpfs mount.
For what it's worth, I've now had the exact same problem, which led me
here.
On bare-metal 20.04 using full blank HDDs as OSDs (/dev/sda etc.),
installing with cephadm worked fine on an XFS root, but when I later
reinstalled with a ZFS root I got the same behaviour described above.
We tried deploying with Docker by itself, then with ceph-ansible by itself:
https://docs.ceph.com/ceph-ansible/master/
For ceph-ansible we used version 5.
https://computingforgeeks.com/install-ceph-storage-cluster-on-ubuntu-linux-servers/
I was pretty much following this simple tutorial:
http://prashplus.blogspot.com/2018/01/ceph-single-node-setup-ubuntu.html
I'll try to add docker and ceph-ansible to the equation and see if I can
reproduce it.
** Changed in: zfs-linux (Ubuntu)
Status: New => In Progress
BTW, how did you install ceph-ansible? I can't find a 20.04 package in
the ansible ppa.
We are using the latest Ubuntu 20.04 and we have tried both a ceph-ansible
and a Docker deploy; both give us issues on a ZFS root fs. How are
you deploying?
If you give me your list of commands + image I can retry.
I've tried to reproduce the problem on a VM (that uses ZFS as rootfs)
setting up a single-node ceph cluster, but OSD is coming up correctly:
$ sudo ceph -s | grep osd
osd: 1 osds: 1 up (since 50m), 1 in (since 59m)
Could you provide more details about your particular ceph configuration
/ infrastructure?
** Changed in: zfs-linux (Ubuntu)
Assignee: (unassigned) => Andrea Righi (arighi)