I've tried to reproduce the problem on a VM (using ZFS as the root
filesystem) by setting up a single-node ceph cluster, but the OSD comes
up correctly:

$ sudo ceph -s | grep osd
    osd: 1 osds: 1 up (since 50m), 1 in (since 59m)

Could you provide more details about your particular ceph configuration
/ infrastructure, so that I can try to reproduce the problem in an
environment more similar to yours? Thanks.
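
For reference, the rough steps I used on the VM were the following (a
sketch only; the monitor IP and the data device are examples from my
test setup, adjust them to your environment):

$ sudo cephadm bootstrap --mon-ip 10.0.0.10
$ sudo ceph orch daemon add osd $(hostname):/dev/vdb

It would also help to know whether, on your affected host, the OSD
directory gets populated at all after the activate step, e.g.:

$ ls -l /var/lib/ceph/osd/ceph-0/
$ cat /var/lib/ceph/osd/ceph-0/type

On a working bluestore OSD the second command should print "bluestore";
the "missing 'type' file" error in your log suggests it is absent.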

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1881747

Title:
  cephadm does not work with zfs root

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  When trying to install ceph on Ubuntu 20.04 with ZFS as the root file
  system, the OSDs do not come up.

  The OSDs fail with the following errors:

  May 29 16:51:11 ip-10-0-0-148 systemd[1]: ceph-a3ed1cb2-a1cb-11ea-8daf-a729fb450032@osd.0.service: Main process exited, code=exited, status=1/FAILURE
  May 29 16:51:12 ip-10-0-0-148 systemd[1]: ceph-a3ed1cb2-a1cb-11ea-8daf-a729fb450032@osd.0.service: Failed with result 'exit-code'.
  May 29 16:51:22 ip-10-0-0-148 systemd[1]: ceph-a3ed1cb2-a1cb-11ea-8daf-a729fb450032@osd.0.service: Scheduled restart job, restart counter is at 4.
  May 29 16:51:22 ip-10-0-0-148 systemd[1]: Stopped Ceph osd.0 for a3ed1cb2-a1cb-11ea-8daf-a729fb450032.
  May 29 16:51:22 ip-10-0-0-148 systemd[1]: Starting Ceph osd.0 for a3ed1cb2-a1cb-11ea-8daf-a729fb450032...
  May 29 16:51:22 ip-10-0-0-148 docker[114525]: Error: No such container: ceph-a3ed1cb2-a1cb-11ea-8daf-a729fb450032-osd.0
  May 29 16:51:22 ip-10-0-0-148 systemd[1]: Started Ceph osd.0 for a3ed1cb2-a1cb-11ea-8daf-a729fb450032.
  May 29 16:51:23 ip-10-0-0-148 bash[114543]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
  May 29 16:51:23 ip-10-0-0-148 bash[114543]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
  May 29 16:51:23 ip-10-0-0-148 bash[114543]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-b3cf0dc5-a5fb-45c5-af3c-b85ef0b115ee/osd-block-3bfa4417-18e5-49f9->
  May 29 16:51:23 ip-10-0-0-148 bash[114543]: Running command: /usr/bin/ln -snf /dev/ceph-b3cf0dc5-a5fb-45c5-af3c-b85ef0b115ee/osd-block-3bfa4417-18e5-49f9-95ee-4c5912f0fa22 /var/lib/ceph/osd/ceph-0/block
  May 29 16:51:23 ip-10-0-0-148 bash[114543]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
  May 29 16:51:23 ip-10-0-0-148 bash[114543]: Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph--b3cf0dc5--a5fb--45c5--af3c--b85ef0b115ee-osd--block--3bfa4417--18e5--49f9--95ee--4c5912f0fa22
  May 29 16:51:23 ip-10-0-0-148 bash[114543]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
  May 29 16:51:23 ip-10-0-0-148 bash[114543]: --> ceph-volume lvm activate successful for osd ID: 0
  May 29 16:51:24 ip-10-0-0-148 bash[115166]: debug 2020-05-29T16:51:24.602+0000 7f05cfb9cec0  0 set uid:gid to 167:167 (ceph:ceph)
  May 29 16:51:24 ip-10-0-0-148 bash[115166]: debug 2020-05-29T16:51:24.602+0000 7f05cfb9cec0  0 ceph version 15.2.2 (0c857e985a29d90501a285f242ea9c008df49eb8) octopus (stable), process ceph-osd, pid 1
  May 29 16:51:24 ip-10-0-0-148 bash[115166]: debug 2020-05-29T16:51:24.602+0000 7f05cfb9cec0  0 pidfile_write: ignore empty --pid-file
  May 29 16:51:24 ip-10-0-0-148 bash[115166]: debug 2020-05-29T16:51:24.602+0000 7f05cfb9cec0 -1 missing 'type' file and unable to infer osd type

  Using Ubuntu 20.04 without a ZFS root works fine.
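
  (A note for anyone debugging this: the final error above indicates
  that the tmpfs mounted on /var/lib/ceph/osd/ceph-0 was never
  populated with the 'type' file that "ceph-bluestore-tool
  prime-osd-dir" normally writes. A quick check on an affected host,
  using the OSD path from the log above, would be:

  $ cat /var/lib/ceph/osd/ceph-0/type

  which should print "bluestore" on a healthy OSD.)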

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1881747/+subscriptions
