reopen 764918
thanks
Hi,
upstream OVMF co-maintainer here again. (From a personal email address this
time, rather than my Red Hat one; I'm on vacation.)
Please read the recently released OVMF whitepaper:
http://www.linux-kvm.org/page/OVMF
Specifically, please search it for occurrences of the string "OVMF_". It
should give you a clear picture. Please read it carefully.
Anyway, here's a short summary.
(1) The NvVars file on the ESP dates back to the time when we had no working
pflash emulation in qemu + KVM; also no driver for it in OVMF. NvVars is a
fake variable store that is only writeable before ExitBootServices().
If you update non-volatile UEFI variables in the guest before
ExitBootServices(), they are saved in NvVars immediately. If you update
them from the runtime OS, the changes are only saved in memory. If you
reboot the guest (inside the same qemu instance), then the memory changes
are flushed to NvVars. If you power off the VM after making changes to the
non-volatile variables from the runtime guest OS, then those changes will be
lost.
In short, NvVars is a kludge that used to be necessary for faking
non-volatile UEFI variables to some extent. It allowed UEFI guest OS
installers to work (because most of those installers reboot the system after
setting up Boot#### and BootOrder). But it is gravely inferior to a
flash-based varstore, both due to the lifecycle issues mentioned above, and
due to not supporting persistent authenticated variables at all (ie. the
Secure Boot-related variables: PK, KEK, db/dbx).
(2) We *do* have flash chip emulation now, in all of KVM, QEMU, and OVMF.
Everyone needs to stop using -bios at once (search the whitepaper for it as
well), and start using '-drive if=pflash'. This is all explained in the
whitepaper. If you do this, then the UEFI runtime variable services will
work in OVMF as expected -- changes will be made permanent at guest OS
runtime too, and persistent Secure Boot related variables will be available
as well (assuming you build OVMF with -D SECURE_BOOT_ENABLE of course).
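To illustrate, a qemu invocation along these lines uses the split files (the
paths and the guest name below are just examples, adjust them to your
distribution's layout; the ordering matters -- OVMF_CODE.fd must be the
first pflash drive):

```shell
# Instead of:  qemu-system-x86_64 -bios OVMF.fd ...
# map the firmware binary read-only, and the VM's private copy of the
# varstore template read-write, as two pflash drives:
qemu-system-x86_64 \
  -drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=/path/to/myguest_VARS.fd
```

With this setup the variable services write straight to the second pflash
drive, so non-volatile variable updates survive guest power-off.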
(3) Okay, we can now discuss the split files. As I said in an earlier
comment, OVMF.fd is a unified file that contains both a live varstore and a
firmware binary. This was the only build output file originally, but it is
unsuitable for managing several guests on the same host:
(3a) first, you can't share OVMF.fd between guests. Each of those will want
to store its own private set of variables in the varstore part. So you'd
have to copy the full file for each guest.
(3b) That would break central firmware updates though. Namely, same as with
SeaBIOS, you might want to update the firmware centrally on a host, by
upgrading the OVMF package, and then each VM should see that update at its
next boot. However, if you have copied OVMF.fd for each guest, due to (3a),
then the package update would have to replace the relevant (ie. firmware
executable) portion of each VM's copy. This cannot work, obviously.
So the solution was to introduce OVMF_CODE.fd and OVMF_VARS.fd as build
output files. The first file is mapped read-only and shared by all VMs on a
host system. The second file is *not* mapped at all -- it is a
*template*. When you create a new virtual machine with libvirt-based tools
(virt-manager or virt-install), then libvirt *copies* the varstore template
to a VM-specific file; the pattern for the target file is
"/var/lib/libvirt/qemu/nvram/guestname_VARS.fd".
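In the libvirt domain XML, the resulting configuration looks roughly like
this (a sketch of the pattern described above, assuming a libvirt recent
enough to support the pflash loader and nvram elements):

```xml
<os>
  <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
  <nvram template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/guestname_VARS.fd</nvram>
</os>
```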
If you use qemu directly instead, ie. without libvirt, then you are
responsible for this copying step yourself. (And you can place the copy
wherever you like.)
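For example, such a manual copy could be as simple as (destination path is
entirely your choice, the name below is illustrative):

```shell
# Give the new VM its own private, writable varstore, created from the
# read-only template shipped by the OVMF package:
cp /usr/share/OVMF/OVMF_VARS.fd /path/to/myguest_VARS.fd
```

Then pass /path/to/myguest_VARS.fd as the second pflash drive for that VM
on every boot.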
The end result is that an OVMF package update will update the OVMF_CODE.fd
and OVMF_VARS.fd files, the first of which will be "picked up" by each guest
at its next boot, whereas the updated template will affect (should there be
any changes in it at all) *brand new* VMs only. Preexisting VMs will see
the firmware binary update (ie. OVMF_CODE.fd), but they will keep their
private varstores intact.
Again, /usr/share does not store live varstores; only the *template* resides
there. The OVMF_VARS.fd file stored there should be owned by root:root, and
have file mode bits 0644. The live varstores will reside under
/var/lib/libvirt/qemu/nvram/, if managed by libvirt (which is the
recommended way), or wherever else the user chooses to store them, if he
or she uses qemu directly.
Summary:
- please peruse the OVMF whitepaper,
- "-bios" and NvVars are *strongly* deprecated, and we might even remove
NvVars in upstream OVMF going forward -- instead, please use two instances
of the "-drive if=pflash" option; the first for OVMF_CODE.fd, the second
for the VM-specific *copy* of OVMF_VARS.fd,
- all distributions shipping OVMF should provide OVMF_CODE.fd and
OVMF_VARS.fd under /usr/share, with the permissions usual for that
directory tree. In addition, the unified OVMF.fd should not be provided
at all, if possible; it only leads to confusion.
Thanks
Laszlo