I was able to reproduce this with a VM deployed by MAAS.  I created a VM
and added 26 disks to it using virsh (note: I use ZFS volumes for my
disks):

for i in {a..z}; do sudo zfs create -s -V 30G rpool/libvirt/maas-node-20$i; done
for i in {a..z}; do
  virsh attach-disk maas-node-20 /dev/zvol/rpool/libvirt/maas-node-20$i sd$i \
    --current --cache none --io native
done
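
To confirm the attach loop worked (just a sanity check, assuming the domain
is named maas-node-20 as above), the targets can be listed from the host;
the second command should report 26:

virsh domblklist maas-node-20
virsh domblklist maas-node-20 | awk '$1 ~ /^sd/' | wc -l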

Then in MAAS:

Commission the machine so that all of the disks are recognized.
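
For reference (assuming the same "admin" CLI profile and machine system ID
used in the next step), commissioning can also be kicked off from the CLI:

maas admin machine commission $machine_id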

machine_id=123abc
for i in {b..z}; do
  device_id=$(maas admin machine read $machine_id | \
    jq ".blockdevice_set[] | select(.name == \"sd$i\") | .id")
  vgid=$(maas admin volume-groups create $machine_id name=vg$i \
    block_devices=$device_id | jq '.id')
  maas admin volume-group create-logical-volume $machine_id $vgid \
    name=sd${i}lv size=32208060416
done
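
Instead of hard-coding the LV size, it could be read back from the volume
group created inside the same loop (a sketch, assuming the volume-group
object exposes an available_size field, which I believe it does in MAAS 2.x):

  lv_size=$(maas admin volume-group read $machine_id $vgid | jq '.available_size')
  maas admin volume-group create-logical-volume $machine_id $vgid \
    name=sd${i}lv size=$lv_size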

You may need to change the hard-coded size in the loop above.  I then
deployed the system twice with Bionic, with Xenial as the commissioning
OS.  The second time I saw the "failed: Device or resource busy" errors.
I am using MAAS 2.7.

This reproduces easily with Xenial as the commissioning OS.
This does not reproduce using Xenial with the HWE kernel as the
commissioning OS.
I cannot reproduce this using Bionic as the commissioning OS.

https://bugs.launchpad.net/bugs/1871874

Title:
  lvremove occasionally fails on nodes with multiple volumes and curtin
  does not catch the failure
