I already asked the kernel team to have a look at this as well, when I
marked it as affecting 'linux (Ubuntu)', since it does look suspicious.

And no, the disk nodes are missing from /dev.
It's a multipath environment with two HBAs, two paths each,
so there should be four SCSI devices (sda, sdb, sdc and sdd),
but there are none (seen from the busybox shell):

ls -la /dev/sd*
ls: /dev/sd*: No such file or directory

ls -la /sys/block
lrwxrwxrwx    1         0 loop3 -> ../devices/virtual/block/loop3
lrwxrwxrwx    1         0 loop5 -> ../devices/virtual/block/loop5
lrwxrwxrwx    1         0 loop7 -> ../devices/virtual/block/loop7
lrwxrwxrwx    1         0 loop0 -> ../devices/virtual/block/loop0
lrwxrwxrwx    1         0 loop2 -> ../devices/virtual/block/loop2
lrwxrwxrwx    1         0 loop4 -> ../devices/virtual/block/loop4
lrwxrwxrwx    1         0 loop6 -> ../devices/virtual/block/loop6
lrwxrwxrwx    1         0 loop1 -> ../devices/virtual/block/loop1
dr-xr-xr-x   12         0 ..
drwxr-xr-x    2         0 .
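
For completeness, a few more checks that could be run from the same busybox
shell (just a sketch; tool availability and sysfs paths are from memory and
not verified on that exact initramfs):

lsmod | grep -E 'zfcp|sd_mod|dm_'    # are the zfcp, sd and dm modules loaded at all?
cat /proc/partitions                 # does the kernel see any block devices?
ls /sys/bus/ccw/drivers/zfcp/        # are the FCP subchannels bound to the zfcp driver?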

https://bugs.launchpad.net/bugs/2102236

Title:
  plucky/s390x (with root disk on dm?) does not come up after kernel
  update + reboot

Status in Ubuntu on IBM z Systems:
  New
Status in linux package in Ubuntu:
  New
Status in lvm2 package in Ubuntu:
  New

Bug description:
  While using the plucky daily from March 12 (which still comes with kernel 6.12),
  and working around LP#2101831 by forcing the installation to not apply any
  updates, I get a system installed that is at kernel level 6.12.

  Since 6.14 has been out in the plucky release since yesterday (March 12th),
  I tried to upgrade from 6.12 to 6.14, and the update itself seemed to be
  smooth (I couldn't find any errors while doing a full-upgrade in the
  terminal - see the attached logs).

  But after executing a reboot, the system (in 3 different
  configurations that use dm) does not come up again, and ends up in the
  busybox shell, complaining that the root device couldn't be found:

  # s1lp15 FCP/SCSI multipath with LVM
  ALERT!  /dev/disk/by-id/dm-uuid-LVM-SlleSC5YA825VOM3t0KHBVFrLJcNWsnwZsObNziIB9Bk2mSVphnuTEOQ2eFiBbE1 does not exist.  Dropping to a shell!

  # s1lp15 2 DASDs with LVM
  ALERT!  /dev/disk/by-id/dm-uuid-LVM-ePTbsojYPfgMacKXwpIMNMvxk80qGzlPhRYw7DJlovmqHyla9TK6NGc70p1JN29b does not exist.  Dropping to a shell!

  # s1lp15 FCP/SCSI multipath, no LVM
  ALERT!  /dev/disk/by-id/dm-uuid-part1-mpath-36005076306ffd6b60000000000002603 does not exist.  Dropping to a shell!
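
  Next time it drops to that shell I also want to check whether
  device-mapper is doing anything at all (a sketch, assuming dmsetup and
  lsmod made it into the initramfs, which they should have with
  multipath enabled):

  ls -la /dev/mapper/                    # does device-mapper create any nodes at all?
  dmsetup ls                             # are any dm tables set up?
  lsmod | grep -E 'dm_mod|dm_multipath'  # are the dm modules even loaded?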

  However, using a single disk without dm (so: no multipath, no LVM),
  the system is able to come up again after the reboot (following the
  kernel upgrade).

  # s1lp15 single DASD no LVM
  here the root device is:
  root=/dev/disk/by-path/ccw-0.0.260b-part1
  and it exists.

  In the 3 cases that repeatedly fail, "/dev/disk/by-id" is missing.
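
  One thing worth trying from the (initramfs) prompt is to re-trigger
  udev and see whether the by-id links show up late (sketch only -
  udevadm and the lvm tool are normally part of the Ubuntu initramfs,
  but I haven't verified that on this image):

  udevadm trigger --action=add
  udevadm settle
  ls -la /dev/disk/by-id/
  lvm vgchange -ay    # if the lvm binary is there, try activating the VGs by hand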

  I am not sure yet what's causing this; it could be an issue with
  device-mapper/lvm2, but also with the udev rules or the kernel.
  So maybe I need to add the kernel as an affected component too (for now).
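
  To narrow that down (initrd contents vs. udev rules vs. kernel), it
  could also help to compare what got packed into the 6.12 and 6.14
  initrds, roughly like this (the exact initrd file names on the system
  may differ):

  lsinitramfs /boot/initrd.img-6.12.* | grep -E 'multipath|dm-|lvm|zfcp' | sort > /tmp/ird-6.12
  lsinitramfs /boot/initrd.img-6.14.* | grep -E 'multipath|dm-|lvm|zfcp' | sort > /tmp/ird-6.14
  diff -u /tmp/ird-6.12 /tmp/ird-6.14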
