So /usr/lib/udev/rules.d/60-persistent-storage-dm.rules is responsible
for creating the /dev/disk/by-id/dm-uuid-* symlinks:

root@plucky:~# grep -rn "disk/by-id/dm-uuid" /usr/lib/udev/rules.d/
/usr/lib/udev/rules.d/60-persistent-storage-dm.rules:18:ENV{DM_UUID}=="?*", SYMLINK+="disk/by-id/dm-uuid-$env{DM_UUID}"
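For comparison, on a working system the resulting links can be listed like
this (just a sketch; the actual link names will differ per setup):

  # list the dm-uuid-* symlinks udev created and where they point
  ls -l /dev/disk/by-id/ | grep dm-uuid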

That symlink depends on a non-empty `DM_UUID` udev environment variable,
which is set in /usr/lib/udev/rules.d/55-dm.rules:

root@plucky:~# grep -rn "ENV{DM_UUID}=[^=]" /usr/lib/udev/rules.d/
/usr/lib/udev/rules.d/55-dm.rules:133:TEST=="dm", ENV{DM_NAME}="$attr{dm/name}", ENV{DM_UUID}="$attr{dm/uuid}", ENV{.DM_SUSPENDED}="$attr{dm/suspended}"
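To see whether udev actually recorded that variable for a given device,
something like this should work (dm-0 is only a placeholder; use the real
minor number):

  # show the udev properties recorded for the device; DM_UUID and DM_NAME
  # should be present if 55-dm.rules ran successfully
  udevadm info --query=property --name=/dev/dm-0 | grep -E '^DM_(UUID|NAME)='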

So if that environment variable is not set, the symlinks will not be created.
One reason the variable might not be set is that /sys/block/dm-N/dm (or
/sys/block/dm-N/dm/uuid) does not exist.

Can you check whether those paths exist in /sys, and maybe paste the output
of `udevadm info /sys/block/dm-N` for each N? Something like the sketch below
should cover it.
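
A minimal sketch of those checks, assuming the devices show up as
/sys/block/dm-*:

  # for every dm device, check the sysfs attributes 55-dm.rules reads,
  # then dump what udev knows about the device
  for d in /sys/block/dm-*; do
      echo "== $d"
      ls -ld "$d/dm" "$d/dm/uuid" 2>&1
      cat "$d/dm/uuid" 2>/dev/null
      udevadm info "$d"
  done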

https://bugs.launchpad.net/bugs/2102236

Title:
  plucky/s390x (with root disk on dm?) does not come up after kernel
  update + reboot

Status in Ubuntu on IBM z Systems:
  New
Status in linux package in Ubuntu:
  New
Status in lvm2 package in Ubuntu:
  New

Bug description:
  Using the plucky daily image from March 12 (which still comes with kernel
  6.12), and working around LP#2101831 by forcing the installation not to
  apply any updates, I get a system installed at kernel level 6.12.

  Since 6.14 has been out in the plucky release since yesterday (March 12th),
  I tried to upgrade from 6.12 to 6.14, and the update itself seemed to be
  smooth (I couldn't find any errors while doing a full-upgrade in the
  terminal - see the attached logs).

  But after a reboot, the system (in 3 different configurations that use dm)
  does not come up again, and ends up in busybox, complaining that the root
  device couldn't be found:

  # s1lp15 FCP/SCSI multipath with LVM
  ALERT!  /dev/disk/by-id/dm-uuid-LVM-SlleSC5YA825VOM3t0KHBVFrLJcNWsnwZsObNziIB9Bk2mSVphnuTEOQ2eFiBbE1 does not exist.  Dropping to a shell!

  # s1lp15 2DASDs with LVM:
  ALERT!  /dev/disk/by-id/dm-uuid-LVM-ePTbsojYPfgMacKXwpIMNMvxk80qGzlPhRYw7DJlovmqHyla9TK6NGc70p1JN29b does not exist.  Dropping to a shell!

  # s1lp15 FCP/SCSI Multipath no LVM
  ALERT!  /dev/disk/by-id/dm-uuid-part1-mpath-36005076306ffd6b60000000000002603 does not exist.  Dropping to a shell!

  However, with a single disk and no dm (so: no multipath, no LVM), the
  system is able to come up again after the reboot (after a kernel
  upgrade).

  # s1lp15 single DASD no LVM
  here the root device is:
  root=/dev/disk/by-path/ccw-0.0.260b-part1
  and it exists.

  In the 3 different cases (that repeatedly fail), "/dev/disk/by-id" is
  missing.

  I am not sure yet what is causing this; it could be an issue with
  device-mapper/lvm2, but also with the udev rules or the kernel. So maybe I
  need to add the kernel as an affected component too (for now).
