[Kernel-packages] [Bug 1718761] Re: It's not possible to use OverlayFS (mount -t overlay) to stack directories on a ZFS volume

2020-04-21 Thread Nick Niehoff
I ran into this exactly as smoser described, using overlay in a container that is backed by zfs via lxd. I believe this will become more prevalent as people start to use zfs root with Focal 20.04.
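For reference, a minimal reproducer along the lines described above might look like the following; the container name and directory paths are only illustrative, and it assumes lxd's default storage pool is zfs-backed:

  # launch a container on a zfs-backed lxd storage pool
  lxc launch ubuntu:20.04 overlay-test
  lxc exec overlay-test -- bash

  # inside the container: try to stack an overlay over directories
  # that live on the zfs dataset backing the container's rootfs
  mkdir -p /tmp/lower /tmp/upper /tmp/work /tmp/merged
  mount -t overlay overlay \
      -o lowerdir=/tmp/lower,upperdir=/tmp/upper,workdir=/tmp/work /tmp/merged

  # on an affected kernel the mount is rejected; dmesg should show
  # why overlayfs refuses the zfs-backed upper/work directories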

[Kernel-packages] [Bug 1871874] Re: lvremove occasionally fails on nodes with multiple volumes and curtin does not catch the failure

2020-04-10 Thread Nick Niehoff
Ryan, From the logs the concern is the "device or resource busy" message:

  Running command ['lvremove', '--force', '--force', 'vgk/sdklv'] with allowed return codes [0] (capture=False)
  device-mapper: remove ioctl on (253:5) failed: Device or resource busy
  Logical volume "sdklv" successfully removed
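As an illustration (not taken from the attached logs), the kind of check that usually narrows down which holder keeps a device-mapper node busy looks like this; the dm minor is the (253:5) quoted above:

  # map the major:minor back to a dm name
  dmsetup info -c
  ls -l /dev/dm-5

  # see which kernel objects still hold the device open
  ls /sys/block/dm-5/holders/

  # see which processes (if any) still have it open
  fuser -vm /dev/dm-5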

[Kernel-packages] [Bug 1871874] Re: lvremove occasionally fails on nodes with multiple volumes and curtin does not catch the failure

2020-04-10 Thread Nick Niehoff
** Attachment added: "curtin-install-cfg.yaml" https://bugs.launchpad.net/ubuntu/+source/curtin/+bug/1871874/+attachment/5351450/+files/curtin-install-cfg.yaml -- You received this bug notification because you are a member of Kernel Packages, which is subscribed to linux in Ubuntu. https://bu

[Kernel-packages] [Bug 1871874] Re: lvremove occasionally fails on nodes with multiple volumes and curtin does not catch the failure

2020-04-10 Thread Nick Niehoff
** Attachment added: "curtin-install.log" https://bugs.launchpad.net/ubuntu/+source/curtin/+bug/1871874/+attachment/5351448/+files/curtin-install.log -- You received this bug notification because you are a member of Kernel Packages, which is subscribed to linux in Ubuntu. https://bugs.launchp

[Kernel-packages] [Bug 1871874] Re: lvremove occasionally fails on nodes with multiple volumes and curtin does not catch the failure

2020-04-10 Thread Nick Niehoff
Ryan, We believe this is a bug because we expect curtin to wipe the disks. In this case it fails to wipe the disks, and occasionally that causes issues with our automation deploying ceph on those disks. This may be more of an issue with LVM and a race condition when trying to wipe all of the disks.
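To illustrate the kind of workaround we have in mind (this is only a sketch, not something curtin is claimed to do today, and the VG/LV names are just the ones quoted from the log above), settling udev and retrying the removal usually papers over the race:

  # let udev finish processing outstanding events before touching LVM
  udevadm settle

  # deactivate the LV first so device-mapper can drop the node cleanly
  lvchange -an vgk/sdklv

  # retry the removal a few times in case the node is still briefly busy
  for i in 1 2 3; do
      lvremove --force --force vgk/sdklv && break
      sleep 2
  done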