On 18 January 2017 at 15:39, bugproxy <bugpr...@us.ibm.com> wrote:
> ------- Comment From ma...@de.ibm.com 2017-01-18 10:36 EDT-------
> (In reply to comment #21)
>> Installed lpar onto 154d dasd drive, using LVM automatic partitioning recipe.
>> After installation and reipl, I did the following:
>
>> $ sudo update-initramfs -u
>
> I understand that things work with this explicit call.

How does it work with e.g. dracut? One must regenerate the initramfs
to activate the additional drive on boot, no?
Is one required to call zipl (which calls into dracut to regenerate
the initramfs...?!), or does one call dracut directly?
I.e. how do things work without any explicit call to regenerate the initramfs?

> However, users can easily miss this since it's not entirely obvious.
>

Regular Ubuntu/Debian users do know to call update-initramfs -u when
fiddling with / changing rootfs devices (not the most trivial task, to
be honest), just as Fedora users know to call dracut. IMHO.

>> At the end of all of this I have triggered a reboot; whilst watching
>> Operating system messages. Reboot was completed successfully. Thus imho this
>> bug is invalid.
>
>> However, I do think that on Ubuntu systems "chzdev -e" should always trigger
>> "update-initramfs -u" irrespective of what has been activated, so that the
>> initramfs has udev rules that are as up to date as possible. Simply because it is
>> impossible to predict at chzdev activation time whether or not something
>> will be formatted and added to become part of the rootfs backing devices.
>
> Hm, good point. But then again, it would be suboptimal to have hundreds
> of disks (paths) (or vNICs or other device types) activated early in
> initramfs just because a few of the disks are actually required to mount
> the root-fs (and that's all an initramfs should do). Alas, I don't know
> how to solve it optimally.
>

Ubuntu uses chzdev, which generates udev rules, and our initramfs
hooks copy all of those udev rules into the initramfs, since udev runs
inside the Ubuntu initramfs.
Boot continues as soon as the rootfs is detected to be available; if
other devices are activated in parallel, that should be mostly
harmless, and boot will not wait for everything the udev rules specify
to be activated in the initramfs.
So it is kind of a zero-sum game: eventually all of the devices
specified by the chzdev-generated udev rules will be activated, and it
does not matter much whether some of them come up from the initramfs
or after it.
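
For reference, this is roughly what such an initramfs-tools hook looks
like (a minimal sketch only; the actual hook shipped in Ubuntu may
differ, and the 41-*.rules naming pattern for chzdev-generated rules
is an assumption on my part):

  #!/bin/sh
  # initramfs-tools hook: copy chzdev-generated udev rules into the initramfs
  PREREQ=""
  prereqs() { echo "$PREREQ"; }
  case "$1" in
      prereqs) prereqs; exit 0 ;;
  esac
  . /usr/share/initramfs-tools/hook-functions
  # chzdev persists device configuration as udev rules; copy all of them,
  # since we cannot tell which ones the rootfs will eventually need
  for rule in /etc/udev/rules.d/41-*.rules; do
      [ -e "$rule" ] && copy_file rule "$rule"
  done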

> (In reply to comment #18)
>> I have now added zdev root update hook in zesty, such that chzdev will
>> call update-initramfs -u.
>> I will test the behaviour and will cherry-pick that as an SRU into
>> xenial & yakkety.
>
> I cannot quite follow why this bug is invalid.
> I thought your above quoted code changes actually fixed this bug by not 
> requiring the user to explicitly run  "update-initramfs -u" any more.

I added the hook such that chzdev can call it when it needs to.

But I'm arguing that the hooks chzdev calls will never be sufficient
when one decides to move their rootfs, on any Linux: at the time of
the initial "chzdev -e" call, the device being activated is not yet
part of the rootfs stack; it only becomes part of it after one e.g.
expands LVM / mdadm / btrfs / etc. onto it. Does that make sense?
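
To make that concrete, here is a purely illustrative sequence (the
device id 0.0.1234 and the ub01-vg/root names are invented, and the
formatting/partitioning steps are elided):

  chzdev -e dasd 0.0.1234                                # activate + persist the new disk
  # (low-level format / partition it as needed, e.g. dasdfmt/fdasd; omitted here)
  pvcreate /dev/disk/by-path/ccw-0.0.1234-part1
  vgextend ub01-vg /dev/disk/by-path/ccw-0.0.1234-part1  # only NOW does it back the rootfs
  lvextend -l +100%FREE ub01-vg/root
  resize2fs /dev/ub01-vg/root
  update-initramfs -u                                    # still a manual step; chzdev ran long before this

At the point chzdev ran, the disk was just another disk; nothing could
have known it would end up underneath the rootfs.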

> If the bug is now declared invalid as in "user error", then I'm puzzled about 
> the reasons for a code change.
>

The code change was mostly a red herring ("oh, there is a hook
integration that is not currently used, let's use it if we can").
The root cause of the bug is a user error: not regenerating the
initramfs after changing the rootfs devices.

From the bug log, it seems no attempt was made to make sure the
initramfs was updated after the devices backing the rootfs were
changed.

As far as I understand, the pattern should be: (1) chzdev -e
new-device; (2) mangle new-device into the rootfs; (3) call something
to regenerate the initramfs.

Where (3) could be "update-initramfs -u", or "chzdev" again, since at
that point it has enough information to realise that the rootfs stack
has changed and would therefore call "update-initramfs -u" via the
newly integrated hook (the red-herring code change, on newer Ubuntu
only, as I don't think I have SRUed it yet).

But in practice step (3) should imho be just "update-initramfs -u"
(Debian/Ubuntu) or "dracut" (Fedora/SUSE/RHEL/etc.), which is in my
opinion trivial and discoverable enough.
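
To be concrete, a hedged sketch of step (3) per distro family (whether
zipl gets re-run automatically by a kernel hook depends on the setup,
so I list it explicitly here):

  # Debian/Ubuntu
  sudo update-initramfs -u   # rebuild the initrd for the running kernel
  # Fedora/SUSE/RHEL
  sudo dracut -f             # force-rebuild the default initramfs
  # on s390x, make sure the boot record points at the regenerated image
  sudo zipl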
-- 
Regards,

Dimitri.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1641078

Title:
  System cannot be booted up when root filesystem is on an LVM on two
  disks

Status in Ubuntu on IBM z Systems:
  Invalid
Status in linux package in Ubuntu:
  Invalid

Bug description:
  ---Problem Description---
  An LVM root file system spanning multiple disks cannot be booted up
    
  ---uname output---
  Linux ntc170 4.4.0-38-generic #57-Ubuntu SMP Tue Sep 6 15:47:15 UTC 2016 s390x s390x s390x GNU/Linux
   
  ---Patches Installed---
  n/a
   
  Machine Type = z13 
   
  ---System Hang---
   cannot boot up the system after shutdown or reboot
   
  ---Debugger---
  A debugger is not configured
   
  ---Steps to Reproduce---
   Created the root file system on an LVM volume that spans two disks.
After shutting down or rebooting the system, it cannot come back up.
   
  Stack trace output:
   no
   
  Oops output:
   no
   
  System Dump Info:
    The system is not configured to capture a system dump.
   
  Device driver error code:
   Begin: Mounting root file system ... Begin: Running /scripts/local-top ...
   lvmetad is not active yet, using direct activation during sysinit
   Couldn't find device with uuid 7PC3sg-i5Dc-iSqq-AvU1-XYv2-M90B-M0kO8V.
   
  -Attach sysctl -a output to the bug.

  More detailed installation description:

  The installation was on FCP SCSI SAN volumes, each with two active
  paths; multipath was involved. The system IPLed fine up to the point
  that we expanded the root filesystem to span volumes. At boot time,
  the system was unable to locate the second segment of the root
  filesystem. The error message indicated this was due to lvmetad not
  being active.

  Error message:
         Begin: Running /scripts/local-block ... lvmetad is not active yet, using direct activation during sysinit
         Couldn't find device with uuid 7PC3sg-i5Dc-iSqq-AvU1-XYv2-M90B-M0kO8V
         Failed to find logical volume "ub01-vg/root"
          
  PV Volume information: 
  physical_volumes { 

                 pv0 { 
                         id = "L2qixM-SKkF-rQsp-ddao-gagl-LwKV-7Bw1Dz" 
                         device = "/dev/sdb5"        # Hint only 

                         status = ["ALLOCATABLE"] 
                         flags = [] 
                         dev_size = 208713728        # 99.5225 Gigabytes 
                         pe_start = 2048 
                         pe_count = 25477        # 99.5195 Gigabytes 
                 } 

                 pv1 { 
                         id = "7PC3sg-i5Dc-iSqq-AvU1-XYv2-M90B-M0kO8V" 
                         device = "/dev/sda"        # Hint only 

                         status = ["ALLOCATABLE"] 
                         flags = [] 
                         dev_size = 209715200        # 100 Gigabytes 
                         pe_start = 2048 
                         pe_count = 25599        # 99.9961 Gigabytes 

  
  LV Volume Information: 
  logical_volumes { 

                 root { 
                         id = "qWuZeJ-Libv-DrEs-9b1a-p0QF-2Fj0-qgGsL8" 
                         status = ["READ", "WRITE", "VISIBLE"] 
                         flags = [] 
                         creation_host = "ub01" 
                         creation_time = 1477515033        # 2016-10-26 16:50:33 -0400 
                         segment_count = 2 

                         segment1 { 
                                 start_extent = 0 
                                 extent_count = 921        # 3.59766 Gigabytes 

                                 type = "striped" 
                                 stripe_count = 1        # linear 

                                 stripes = [ 
                                         "pv0", 0 
                                 ] 
                         } 
                         segment2 { 
                                 start_extent = 921 
                                 extent_count = 25344        # 99 Gigabytes 

                                 type = "striped" 
                                 stripe_count = 1        # linear 

                                 stripes = [ 
                                         "pv1", 0 
                                 ] 
                         } 
                 } 

  
  Additional testing has been done with CKD volumes and we see the same
  behavior. Only the UUID of the first volume in the VG can be located at
  boot, and the same message ("lvmetad is not active yet, using direct
  activation during sysinit / Couldn't find device with uuid
  xxxxxxxxxxxxxxxxx") is displayed for CKD disks; just a different UUID is
  listed.
  If the root file system only has one segment, on the first volume, whether
  on CKD or SCSI volumes, the system will IPL. Because of this behavior, I do
  not believe the problem is related to SAN disk or multipath. I think it is
  due to the system not being able to read the UUID on any PV in the VG other
  than the IPL disk.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu-z-systems/+bug/1641078/+subscriptions
