The issue is easily reproducible on my machine:

$ uname -r
6.8.0-62-generic

$ sudo snap install microceph
2025-06-27T09:12:27Z INFO Waiting for automatic snapd restart...
microceph (squid/stable) 19.2.0+snapab139d4a1f from Canonical✓ installed

$ sudo microceph cluster bootstrap
$ sudo microceph.ceph osd crush rule rm replicated_rule
$ sudo microceph.ceph osd crush rule create-replicated single default osd

$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
loop0    7:0    0  50.9M  1 loop /snap/snapd/24718
loop1    7:1    0  66.8M  1 loop /snap/core24/1006
loop2    7:2    0 111.8M  1 loop /snap/microceph/1393
sda      8:0    0  93.1G  0 disk 
├─sda1   8:1    0     1G  0 part /boot/efi
└─sda2   8:2    0  92.1G  0 part /
sdb      8:16   0    20G  0 disk 

$ sudo microceph disk add /dev/sdb --wipe

+----------+---------+
|   PATH   | STATUS  |
+----------+---------+
| /dev/sdb | Success |
+----------+---------+

$ sudo microceph.ceph config set global osd_pool_default_size 1
$ sudo microceph.ceph osd pool create cephfs_metadata 8
pool 'cephfs_metadata' created
$ sudo microceph.ceph osd pool create cephfs_data 8
pool 'cephfs_data' created

$ sudo microceph.ceph fs new cephfs cephfs_metadata cephfs_data
  Pool 'cephfs_data' (id '3') has pg autoscale mode 'on' but is not marked as bulk.
  Consider setting the flag by running
    # ceph osd pool set cephfs_data bulk true
new fs with metadata pool 1 and data pool 3

$ sudo microceph.ceph auth get-or-create client.admin mon 'allow *' mds 'allow *' osd 'allow *' mgr 'allow *'
[client.admin]
        key = XXXYYYZZZ

$ sudo mkdir -p /mnt/cephfs
$ sudo mount -t ceph $(hostname -I | awk '{print $1}'):6789:/ /mnt/cephfs -o name=admin,secret=XXXYYYZZZ
$ mount
...
192.168.1.20:6789:/ on /mnt/cephfs type ceph (rw,relatime,name=admin,secret=<hidden>,acl)
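Side note for anyone reproducing this: passing the key with secret= leaves it visible in the mount options (and briefly in the process list). The kernel client also accepts secretfile=, which avoids that. A minimal sketch, assuming the placeholder key from this report; the /tmp path is for illustration only (something under /etc/ceph/ would be more typical):

```shell
# Write the admin key to a root-readable-only file instead of passing it
# on the command line. Placeholder key XXXYYYZZZ is from this report.
umask 077
rm -f /tmp/ceph-admin.secret
printf '%s\n' 'XXXYYYZZZ' > /tmp/ceph-admin.secret
# Then mount with (requires the cluster set up in the steps above):
#   sudo mount -t ceph 192.168.1.20:6789:/ /mnt/cephfs \
#       -o name=admin,secretfile=/tmp/ceph-admin.secret
ls -l /tmp/ceph-admin.secret   # should show mode -rw------- (0600)
```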

$ cd /mnt/cephfs/
$ sudo touch before.txt
$ ll
total 4
drwxr-xr-x 2 root root    1 Jun 27 09:18 ./
drwxr-xr-x 3 root root 4096 Jun 27 09:14 ../
-rw-r--r-- 1 root root    0 Jun 27 09:18 before.txt
$ sudo dmesg | tail
...
[  152.158221] netfs: FS-Cache loaded
[  152.186537] ceph: loaded (mds proto 32)
[  152.189962] libceph: mon0 (1)192.168.1.20:6789 session established
[  152.190529] libceph: client14165 fsid 44252ee5-5d40-4334-8290-df906d1a0655

$ sudo apt install selinux-basics selinux-policy-default
$ sudo selinux-activate
$ sudo reboot

$ sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             default
Current mode:                   permissive
Mode from config file:          permissive
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Memory protection checking:     actual (secure)
Max kernel policy version:      33

$ sudo mount -t ceph $(hostname -I | awk '{print $1}'):6789:/ /mnt/cephfs -o name=admin,secret=XXXYYYZZZ
$ cd /mnt/cephfs/
$ sudo touch after.txt
Killed
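The "Killed" message corresponds to a kernel oops; the backtrace should land in the kernel log. A grep along these lines pulls it out (the pattern is the standard header kernels print for this class of oops; the exact trace will vary by machine):

```shell
# After the killed `touch`, fish the oops backtrace out of the kernel log,
# with a couple of lines of leading context and the call trace that follows.
sudo dmesg | grep -B 2 -A 25 'NULL pointer dereference' || echo 'no oops recorded'
```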

** Changed in: linux (Ubuntu)
       Status: Incomplete => Confirmed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2115447

Title:
  Ubuntu 24.04.2: NULL pointer dereference with Ceph and selinux

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2115447/+subscriptions

