------- Comment From ma...@de.ibm.com 2017-01-18 12:02 EDT -------
(In reply to comment #23)
> (In reply to comment #21)
> > Installed lpar onto 154d dasd drive, using LVM automatic partitioning
> > recipe.
> > After installation and reipl, I did the following:
> >
> > $ sudo update-initramfs -u
>
> I understand that things work with this explicit call.
>
> How does it work with e.g. dracut? One must regenerate the initramfs
> to activate the additional drive on boot, no?
Yes.

> Is one required to call zipl (which calls into dracut to regenerate
> the initramfs....?!) or does one call dracut?

User sequence (in that order): dracut && zipl. "update-initramfs -u"
includes the final zipl step, which is nice.

> > However, users can easily miss this since it's not entirely obvious.
>
> Regular ubuntu/debian users do know to call update-initramfs -u when
> fiddling with/changing rootfs devices (not the most trivial task, to
> be honest).
> Same as fedora users know to call dracut, IMHO.

I guess I've seen too many cases where the user missed this (or a
related) step, and therefore I was hoping for an automatic solution.

> Ubuntu uses chzdev, which generates udev rules, and our initramfs
> hooks copy all of the udev rules into the initramfs, as udev is
> running in the Ubuntu initramfs.

OK, I learned that Ubuntu does not do root-fs dependency tracking of
the z-specific device activation udev rules it includes in the
initramfs. IIRC, zdev might have such tracking if it were used with
dracut instead of update-initramfs. Maybe that's why I was on a
misleading track.

> The boot continues as soon as rootfs is detected to be available. If
> other devices are activated in parallel, it should be mostly
> harmless. And the boot will not wait to activate all the things that
> the udev rules in the initramfs specify.
> Thus it's kind of a zero-sum game: eventually all devices specified
> by chzdev-generated udev rules will be activated, and it does not
> matter much whether some of them are activated from the initramfs or
> post-initramfs.

I've seen it matter, where large installations choked within the
initramfs because of processing udev rules (and in turn things such as
multipath events) for all the other (unnecessary) devices. But that
would be a different bug anyway.

> I added the hook in place such that chzdev can call it when it needs
> to.
>
> But I'm arguing that the hooks that chzdev calls will never be
> sufficient when one decides to move their rootfs, on any Linux.
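To make the sequence being discussed concrete: growing the root VG onto a newly enabled disk and then regenerating the initramfs might look like the following sketch on Ubuntu s390x. The bus ID 0.0.1234 and the VG/LV names are illustrative placeholders, not values from this bug, and the commands require root on real hardware.

```shell
# Persistently enable the new DASD; chzdev writes udev rules that the
# initramfs hooks later copy into the initramfs.
sudo chzdev -e dasd 0.0.1234

# Extend the root VG/LV onto the new disk (names are placeholders).
sudo pvcreate /dev/disk/by-path/ccw-0.0.1234-part1
sudo vgextend ub01-vg /dev/disk/by-path/ccw-0.0.1234-part1
sudo lvextend -l +100%FREE /dev/ub01-vg/root
sudo resize2fs /dev/ub01-vg/root

# The easily missed step: regenerate the initramfs so that early boot
# can activate the second PV. On Ubuntu this also runs zipl; on
# dracut-based distributions the equivalent is: dracut -f && zipl
sudo update-initramfs -u
```

The point of the thread is the last step: at the time of the `chzdev -e` call, the new device is not yet part of the rootfs stack, so only a regeneration done after the LVM expansion captures the dependency.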
> Because at initial "chzdev -e" call time the device that is being
> activated is not yet part of the rootfs stack, but it will become
> part of it after one e.g. expands lvm / mdadm / btrfs / etc. onto it.
> Does that make sense?

It does indeed make sense to me.

> > If the bug is now declared invalid as in "user error", then I'm
> > puzzled about the reasons for a code change.
>
> a code change was mostly a red herring

OK, got it. Thanks for making this clear.

So the bug is invalid unless someone finds a solution for the problem
that root-fs dependencies (can) change after any current chzdev hook.

--
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1641078

Title:
  System cannot be booted up when root filesystem is on an LVM on two
  disks

Status in Ubuntu on IBM z Systems:
  Invalid
Status in linux package in Ubuntu:
  Invalid

Bug description:
  ---Problem Description---
  An LVM root file system spanning multiple disks cannot be booted.

  ---uname output---
  Linux ntc170 4.4.0-38-generic #57-Ubuntu SMP Tue Sep 6 15:47:15 UTC 2016 s390x s390x s390x GNU/Linux

  ---Patches Installed---
  n/a

  Machine Type = z13

  ---System Hang---
  The system cannot boot up after a shutdown or reboot.

  ---Debugger---
  A debugger is not configured

  ---Steps to Reproduce---
  Created the root file system on an LVM logical volume that spans two
  disks. After shutting down or rebooting, the system cannot come up.

  Stack trace output: no
  Oops output: no
  System Dump Info:
  The system is not configured to capture a system dump.

  Device driver error code:
  Begin: Mounting root file system ...
  Begin: Running /scripts/local-top ...
  lvmetad is not active yet, using direct activation during sysinit
  Couldn't find device with uuid 7PC3sg-i5Dc-iSqq-AvU1-XYv2-M90B-M0kO8V.

  -Attach sysctl -a output to the bug.

  More detailed installation description:
  The installation was on FCP SCSI SAN volumes, each with two active
  paths.
  Multipath was involved. The system IPLed fine up to the point that we
  expanded the root filesystem to span volumes. At boot time, the
  system was unable to locate the second segment of the root
  filesystem. The error message indicated this was due to lvmetad not
  being active.

  Error message:
  Begin: Running /scripts/local-block ...
  lvmetad is not active yet, using direct activation during sysinit
  Couldn't find device with uuid 7PC3sg-i5Dc-iSqq-AvU1-XYv2-M90B-M0kO8V
  Failed to find logical volume "ub01-vg/root"

  PV volume information:
  physical_volumes {

      pv0 {
          id = "L2qixM-SKkF-rQsp-ddao-gagl-LwKV-7Bw1Dz"
          device = "/dev/sdb5"    # Hint only

          status = ["ALLOCATABLE"]
          flags = []
          dev_size = 208713728    # 99.5225 Gigabytes
          pe_start = 2048
          pe_count = 25477        # 99.5195 Gigabytes
      }

      pv1 {
          id = "7PC3sg-i5Dc-iSqq-AvU1-XYv2-M90B-M0kO8V"
          device = "/dev/sda"     # Hint only

          status = ["ALLOCATABLE"]
          flags = []
          dev_size = 209715200    # 100 Gigabytes
          pe_start = 2048
          pe_count = 25599        # 99.9961 Gigabytes
      }
  }

  LV volume information:
  logical_volumes {

      root {
          id = "qWuZeJ-Libv-DrEs-9b1a-p0QF-2Fj0-qgGsL8"
          status = ["READ", "WRITE", "VISIBLE"]
          flags = []
          creation_host = "ub01"
          creation_time = 1477515033      # 2016-10-26 16:50:33 -0400
          segment_count = 2

          segment1 {
              start_extent = 0
              extent_count = 921          # 3.59766 Gigabytes

              type = "striped"
              stripe_count = 1            # linear

              stripes = [
                  "pv0", 0
              ]
          }
          segment2 {
              start_extent = 921
              extent_count = 25344        # 99 Gigabytes

              type = "striped"
              stripe_count = 1            # linear

              stripes = [
                  "pv1", 0
              ]
          }
      }
  }

  Additional testing has been done with CKD volumes, and we see the
  same behavior. Only the UUID of the first volume in the VG can be
  located at boot, and the same message:

  lvmetad is not active yet, using direct activation during sysinit
  Couldn't find device with uuid xxxxxxxxxxxxxxxxx

  is displayed for CKD disks; just a different UUID is listed. If the
  root file system has only one segment, on the first volume, whether
  CKD or SCSI, the system will IPL.
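  As a quick sanity check on the metadata above, the sector and extent
  counts are internally consistent: dev_size is in 512-byte sectors,
  and pe_count is in physical extents (4 MiB each, the LVM default,
  assumed here). A minimal check, using the pv1 values from the dump:

```python
# Sanity-check the PV sizes quoted in the LVM metadata.
# dev_size is in 512-byte sectors; pe_count is in 4 MiB extents
# (the LVM default extent size, assumed here).
GIB = 1024 ** 3

def sectors_to_gib(sectors: int) -> float:
    """Convert a 512-byte sector count to GiB."""
    return sectors * 512 / GIB

def extents_to_gib(extents: int, extent_mib: int = 4) -> float:
    """Convert a physical-extent count to GiB."""
    return extents * extent_mib * 1024 ** 2 / GIB

# pv1 from the metadata: dev_size = 209715200, pe_count = 25599
print(round(sectors_to_gib(209715200), 4))  # -> 100.0
print(round(extents_to_gib(25599), 4))      # -> 99.9961

# pv0 from the metadata: dev_size = 208713728
print(round(sectors_to_gib(208713728), 4))  # -> 99.5225
```

  So both PVs are intact on disk; the failure is purely that early boot
  cannot see the second device, not that its metadata is damaged.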
  Because of this behavior, I do not believe the problem is related to
  SAN disks or multipath. I think it is due to the system not being
  able to read the UUID of any PV in the VG other than the IPL disk.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-z-systems/+bug/1641078/+subscriptions

--
Mailing list: https://launchpad.net/~kernel-packages
Post to     : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp