Based on the report here, this affects only setups with custom services/systemd units. Also, blk-availability/blkdeactivate have been in RHEL 7 since 7.0, and this seems to be the only report we have received so far, so I don't expect many users to be affected by this issue.
Also, I think it is less risky to add the extra dependency already described in https://access.redhat.com/solutions/4154611 (a rough sketch of such a drop-in is included at the end of this message) than to split blk-availability / blkdeactivate into (at least) two parts running at different times. If we did that, we would also need to introduce a new synchronization point (such as a systemd target) that other services would have to depend on, which would require many more changes across various other components and carry its own risks. In the future, we will try to cover this shutdown scenario more properly with the new Storage Instantiation Daemon (SID).

https://bugs.launchpad.net/bugs/1832859

Title:
  during shutdown libvirt-guests gets stopped after file system unmount

Status in lvm2: New
Status in libvirt package in Ubuntu: Incomplete
Status in lvm2 package in Ubuntu: New
Status in lvm2 package in Fedora: In Progress

Bug description:

When using automatic suspend at reboot/shutdown, it makes sense to store the suspend data on a separate partition to ensure there is always enough available space. However, this does not work, because the partition is unmounted before or during the libvirt suspend.

Steps to reproduce:
1. Use Ubuntu 18.04.2 LTS.
2. Install libvirt + qemu-kvm.
3. Start a guest.
4. Set libvirt-guests to suspend at shutdown/reboot by editing /etc/default/libvirt-guests.
5. Create an fstab entry that mounts a separate partition on /var/lib/libvirt/qemu/save (an example entry is sketched at the end of this message), then run sudo mount /var/lib/libvirt/qemu/save to mount it.
6. Reboot.

Expected result:
The guest suspend data is written to /var/lib/libvirt/qemu/save and therefore ends up on the partition specified in fstab. At boot, that partition is mounted as specified in fstab, and libvirt-guests can read the data and restore the guests.

Actual result:
The partition is unmounted before libvirt-guests suspends the guests, so the data is written to the partition containing the root file system. During boot, the empty partition is mounted over the non-empty /var/lib/libvirt/qemu/save directory, and libvirt-guests cannot read the saved data. As a side effect, the saved data keeps consuming space on the root partition even though the directory appears empty.

Here are some of the relevant lines from the journal:

Jun 14 00:00:04 libvirt-host blkdeactivate[4343]: Deactivating block devices:
Jun 14 00:00:04 libvirt-host systemd[1]: Unmounted /var/lib/libvirt/qemu/save.
Jun 14 00:00:04 libvirt-host blkdeactivate[4343]: [UMOUNT]: unmounting libvirt_lvm-suspenddata (dm-3) mounted on /var/lib/libvirt/qemu/save... done
Jun 14 00:00:04 libvirt-host libvirt-guests.sh[4349]: Running guests on default URI: vps1, vps2, vps3
Jun 14 00:00:04 libvirt-host blkdeactivate[4343]: [MD]: deactivating raid1 device md1... done
Jun 14 00:00:05 libvirt-host libvirt-guests.sh[4349]: Suspending guests on default URI...
Jun 14 00:00:05 libvirt-host libvirt-guests.sh[4349]: Suspending vps1: ...
Jun 14 00:00:05 libvirt-host blkdeactivate[4343]: [LVM]: deactivating Volume Group libvirt_lvm... skipping
Jun 14 00:00:10 libvirt-host libvirt-guests.sh[4349]: Suspending vps1: 5.989 GiB
Jun 14 00:00:15 libvirt-host libvirt-guests.sh[4349]: Suspending vps1: ...
Jun 14 00:00:20 libvirt-host libvirt-guests.sh[4349]: Suspending vps1: ...
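For reference, the separate save partition from step 5 would be set up with an fstab line roughly like the one below. This is only a sketch: the device path (guessed from the libvirt_lvm-suspenddata name in the journal) and the ext4 file system type are assumptions, not details stated in the report.

  # /etc/fstab -- dedicated partition for libvirt suspend data (sketch)
  /dev/mapper/libvirt_lvm-suspenddata  /var/lib/libvirt/qemu/save  ext4  defaults  0  2

After adding the line, sudo mount /var/lib/libvirt/qemu/save (as in step 5) mounts it without a reboot.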
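The "extra dependency" approach mentioned at the top would look roughly like the following drop-in for libvirt-guests.service. This is a minimal sketch of the general idea, assuming the linked Red Hat solution orders the affected service after blk-availability.service; the article itself is not quoted here, and the exact directives it recommends may differ.

  # /etc/systemd/system/libvirt-guests.service.d/override.conf
  [Unit]
  # Start after blk-availability, so that at shutdown libvirt-guests is
  # stopped (and the guests suspended) before blkdeactivate runs.
  After=blk-availability.service
  # Keep the save partition mounted for as long as libvirt-guests is running.
  RequiresMountsFor=/var/lib/libvirt/qemu/save

The drop-in can be created with systemctl edit libvirt-guests.service, or by writing the file directly and running systemctl daemon-reload.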