Hi, I'm coming back to this bug.
On 07/12/2022 at 22:26, dann frazier wrote:
Thanks for opening this new issue! As I mentioned in #1016359, I had no problems booting a VM w/ virtio-scsi in sid, so this seems like it may be config-specific. Can you provide me with a way to reproduce this - e.g. your libvirt XML?
I think I found additional information. It seems the bug is not really in the ovmf software itself, but in the upgrade path.

I retried today to upgrade ovmf to 2022.11-6. My VM then becomes unable to boot: the SCSI disk no longer appears in the UEFI Boot Manager (only the empty SATA CDROM is visible). See ovmf-2022-11-6.png. I tried both the Virtio SCSI controller and the lsilogic one; in both cases the disk is not visible at all in the UEFI shell or in the UEFI Boot Manager. Downgrading ovmf to 2020.11-2+deb11u1 fixes the problem: the disk is visible again in the UEFI Boot Manager (see ovmf-2020.11-2+deb11u1.png).

Since you said you were able to boot, I then created a new VM with the new ovmf package, using the same file for the HD (taking care never to boot both VMs at the same time). It works! More precisely, the HD is visible in the UEFI Boot Manager and in the UEFI shell (as FS0). From the latter, typing:

  FS0:
  cd efi
  cd debian
  shimx64.efi

lets me boot my system.

So I went back to my first (old) VM and tried to spot the differences. It comes down to the fact that the old VM uses

  <loader readonly="yes" type="pflash">/usr/share/OVMF/OVMF_CODE.fd</loader>

while the new one uses

  <loader readonly="yes" type="pflash">/usr/share/OVMF/OVMF_CODE_4M.secboot.fd</loader>

(other OVMF_CODE_4M.* variants would probably also work, but I have not tested them). When creating a new VM, virt-manager only proposes the OVMF_CODE_4M.* images, not OVMF_CODE.* (even though the latter are still shipped in the ovmf package).

In my old VM, I used virt-manager's XML editor to change "OVMF_CODE.fd" into "OVMF_CODE_4M.secboot.fd". At that point the VM does not boot at all anymore (no UEFI firmware screen, only a message saying that the video is not initialized yet). I then went to /var/lib/libvirt/qemu/nvram and removed (actually renamed) the NVRAM VARS file. This time it works: the old VM boots, the UEFI shell shows my SCSI disk, and the VARS file has been recreated, bigger. I booted manually from the UEFI shell, ran "grub-install efi", and at the next boot my VM started correctly.

  $ sudo ls -l /var/lib/libvirt/qemu/nvram/
  -rw------- 1 libvirt-qemu libvirt-qemu 540672 6 mars 15:26 debian11_VARS.fd
  -rw------- 1 libvirt-qemu libvirt-qemu 131072 20 sept. 2020 visio_VARS-2023-03-06.fd
  -rw------- 1 libvirt-qemu libvirt-qemu 540672 6 mars 15:29 visio_VARS.fd

The first file belongs to the new VM (which I will destroy). The second is the renamed VARS file of the old VM. The last one is the file created when booting the old VM with the new 4M ovmf image; it is about four times bigger.

So an upgrade path exists, but it is not easy to find. If you have a bullseye VM booting with OVMF_CODE.fd from a SCSI disk, you have to:

1) change OVMF_CODE.fd into OVMF_CODE_4M.secboot.fd (or another variant) in the domain XML;
2) remove (or rename) the *_VARS.fd file;
3) boot the VM manually from the UEFI shell;
4) re-install grub ("grub-install efi").

(A rough command sketch of these steps follows at the end of this message.)

I do not know whether something can be done to make this path easier, but at the very least it should be documented. Perhaps it would also be possible to print an error message:
- if the VM boots from a SCSI disk with OVMF_CODE.*;
- if the OVMF_CODE_4M.* code finds a *_VARS.fd file that belongs to OVMF_CODE.*.

Regards,
Vincent
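PS: For reference, here is a rough sketch of the commands corresponding to steps 1) to 4) above. It assumes the old VM's libvirt domain is named "visio" (matching the visio_VARS.fd file listed above); the backup file name is arbitrary, and the location of shimx64.efi on the ESP may differ on other setups, so take this as an illustration rather than an exact recipe.

  # 1) point the <loader> line of the domain XML at the 4M secure-boot image:
  virsh edit visio
  #      <loader readonly="yes" type="pflash">/usr/share/OVMF/OVMF_CODE_4M.secboot.fd</loader>

  # 2) set aside the old VARS file so a fresh one matching the 4M layout
  #    is created at the next boot (I used visio_VARS-2023-03-06.fd as backup name):
  sudo mv /var/lib/libvirt/qemu/nvram/visio_VARS.fd \
          /var/lib/libvirt/qemu/nvram/visio_VARS-backup.fd

  # 3) start the VM, open its graphical console, and boot Debian by hand
  #    from the UEFI shell:
  virsh start visio
  #      Shell> FS0:
  #      FS0:\> cd efi
  #      FS0:\efi\> cd debian
  #      FS0:\efi\debian\> shimx64.efi

  # 4) once the system is up, re-create the EFI boot entry in the new NVRAM:
  sudo grub-install
  #      (I ran "grub-install efi" above; on an EFI Debian install a plain
  #       grub-install should also re-register the boot entry.)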