In message: Re: [PATCH v4 1/1] crosvm: add recipe for ChromeOS Virtual Machine Monitor (VMM) on 23/02/2026 Keerthivasan Raghavan wrote:
> > We can drop the "meta-virtualization layer" from the description,
> > we know the layer just by it being here :)
>
> Ack.
>
> > We can drop the marketing for crosvm from the commit. Users
> > will make their own choice on what they want to use, so we
> > don't need the elevator pitch.
>
> Ack.
>
> > Which unfortunately means I'd have no way of being able to
> > build and test it myself. Getting this working under
> > genericarm64 or, better, qemuarm64 machine definitions (at least
> > minimal functionality) is critical for being able to support it.
>
> Ack. I will try to run the arm64 guest image (generated from Yocto using a
> generic arm64 machine) that exposes /dev/kvm on top of an arm64 host (my
> target with a native arm64 CPU). I am banking on using nested KVM
> virtualization to get this to work (and hopefully it will), as the guest
> image containing crosvm is the end goal, to reproduce a consistent test
> environment.
> Should I add this test image into meta-virtualization, provided it works out?

Possibly, yes. I'd like to do the same for the VMMs that I've done for
containers: just define a VMM_PROFILE that is selected, and that configures
everything we need. That would allow me to collapse all the specific host
images into one (just like I did with container-image-host). But leave that
part to me, as I'd be pretty specific about how I'd want it done. Define a
minimal image for it, and then I can look for a pattern later and
refactor/consolidate things.
> The below ASCII diagram captures the intent:
>
> +==================================================================================================+
> | L0: ARM64 HOST (Generic Armv8 CPU)
> +==================================================================================================+
> | Hardware
> |  - Armv8-A CPU
> |  - EL2 virtualization support
> |  - (must support nested virtualization for L1)
> ----------------------------------------------------------------------------------------------------
> | Host Linux Kernel
> |  - KVM host support
> |  - /dev/kvm
> ----------------------------------------------------------------------------------------------------
> | Userspace
> |  - qemu-system-aarch64
> |    - accel: KVM
> |    - cpu: host
> |    - machine: virt,virtualization=on
> ----------------------------------------------------------------------------------------------------
> | Runs the L1 VM
> +-----------------------------------------------+--------------------------------------------------+
>                                                 |
>                                                 v
> +==================================================================================================+
> | L1: ARM64 GUEST (Yocto Test Appliance, generic Armv8)
> +==================================================================================================+
> | Guest Kernel
> |  - AArch64
> |  - KVM guest support enabled
> |  - /dev/kvm   <-- present only if nested virtualization is supported by L0
> ----------------------------------------------------------------------------------------------------
> | Rootfs (Yocto-built)
> |  - crosvm
> |  - test orchestration / CI hooks
> |
> |  - Packaged L2 artifacts
> |    /opt/l2/
> |    ├── Image        (L2 kernel, generic Armv8)
> |    ├── rootfs.ext4  (L2 root filesystem)
> |    ├── initrd.img   (optional)
> |    ├── *.dtb        (optional)
> |    └── cmdline.txt  (optional)
> +-----------------------------------------------+--------------------------------------------------+
>                                                 |
>                                                 v
> +==================================================================================================+
> | L2: NESTED GUEST VM (generic Armv8, test target)
> +==================================================================================================+
> | Booted by crosvm using packaged artifacts
> | Ideally KVM-accelerated via nested virtualization
> ----------------------------------------------------------------------------------------------------
> | Test boot to shell of the VM.
> +==================================================================================================+
>
> NOTE: The above was generated using gen AI, due credits to M365.

I'm about to push a complete re-working of the Xen guest image bundling to
match the container image bundling. So for now, ignore/skip the need to
provide/build the guest images and bundles in meta-virtualization, since
they'd need to follow a pattern that isn't quite visible yet (I'm just
chasing a few regressions on container bundling).

> My pessimism originates due to the levels of nested virtualization, but
> given that cloud providers like AWS have introduced nested virtualization
> for their EC2/compute instances (I came across a post on LinkedIn that
> announced this; I will need to check the official EC2 docs), I think this
> is a good enough generic test-setup architecture to use for testing
> userspace VMM solutions. What remains is to prove this works as a stable
> test infra setup, without the jitter of random failures due to
> environment/machine drift across the 2/3 layers of virtualization.

It should work, or at least I've been using nested virtualization in the
meta-virtualization context for over a decade to test Xen and the container
stacks. There may need to be some additions to the host kernel, etc., to
make it work, but that is something we can solve and key off the distro
feature.
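For reference, the L0 -> L1 leg of the diagram corresponds to a qemu invocation along these lines. This is only a sketch built from the flags named in the diagram (KVM acceleration, host CPU passthrough, `virt` machine with EL2 exposed); the kernel/rootfs paths and the smp/memory sizing are illustrative placeholders, not values from the patch.

```shell
# Sketch of the L0 qemu launch from the diagram: host-CPU passthrough with
# virtualization=on so the L1 Yocto guest gets a functional /dev/kvm.
# KERNEL/ROOTFS paths and the -smp/-m sizing are placeholders.
KERNEL="Image"                 # L1 AArch64 kernel (placeholder path)
ROOTFS="core-image-l1.ext4"    # L1 Yocto-built rootfs (placeholder path)

QEMU_ARGS="-machine virt,virtualization=on \
 -cpu host -enable-kvm \
 -smp 4 -m 4096 \
 -kernel ${KERNEL} \
 -drive file=${ROOTFS},format=raw,if=virtio \
 -append 'root=/dev/vda rw console=ttyAMA0' \
 -nographic"

# Print the command rather than exec'ing it, so it can be inspected and
# adjusted first; drop the echo to actually boot the L1 guest.
echo qemu-system-aarch64 ${QEMU_ARGS}
```

Once L1 is up, the quick smoke check for the nested case is simply whether /dev/kvm exists inside the guest; if it doesn't, crosvm will have nothing to accelerate L2 with.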
It does need to use the OE-core and meta-virtualization components for the
kernel, etc., otherwise we have dependencies and a test matrix that keeps
getting larger. For things that absolutely need hardware, there are the
dynamic layers, but I don't use those for core functionality testing.

> > I've started creating a pytest infrastructure for new packages
> > (and eventually the old ones) in the layer. If we get a machine
> > working that is more generic, we can add those. At a minimum
> > a README alongside the recipe with testing instructions should
> > be done.
>
> Ack. Will get the README based on the above diagram that captures testing.
> Can you please provide some references or pointers on the pytest aspect?
> It looks like we are trying to automate the end-to-end test functionality
> of userspace virtual machine monitor recipes? I can maybe follow this up
> with the test cases that can be plugged into this pytest setup.

Yes, there are some end-to-end tests for containers and I'm adding them for
Xen right now. Even just basic boot to prompt is good enough for a start; it
doesn't have to be elaborate. That gets us build, kernel, packaging and a
runtime smoke test.

> > crosvm is a little bit short for SUMMARY :)
>
> Ack. Will think of a few appropriate words.
>
> > With that many crates, are there really no variations in
> > the license at all? Note: other recipes are guilty of this as
> > well, and it's on my list to fix them up too.
>
> It is what is available in the crosvm codebase and I think the recipe should
> reflect the same. Here is the reference:
> https://github.com/google/crosvm/blob/main/LICENSE
> Could you please provide insights on what you meant by "variations"? (or)
> What "variations" were you expecting that would seem appropriate to model
> this composite software?

I mean: do all the crates it is pulling in for the build really have the
exact same license? There are no variations in the open source licenses at
all?
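One rough way to answer that question is to scan the declared license of every fetched crate. This is only a sketch: the two sample Cargo.toml manifests below are fabricated so the pipeline runs standalone; against a real crosvm checkout you would point find(1) at the directory where the crate fetcher unpacked the sources instead.

```shell
# Rough sketch of scanning fetched crates for their declared licenses.
# The two sample manifests are fabricated purely so this runs standalone;
# in a real build tree, run find(1) over the unpacked crate sources.
crates=$(mktemp -d)
mkdir -p "${crates}/crate-a" "${crates}/crate-b"
printf 'license = "BSD-3-Clause"\n'      > "${crates}/crate-a/Cargo.toml"
printf 'license = "MIT OR Apache-2.0"\n' > "${crates}/crate-b/Cargo.toml"

# Collect every declared license and count occurrences. More than one
# distinct line here means LICENSE in the recipe needs to be a combined
# expression, not a single license.
summary=$(find "${crates}" -name Cargo.toml -exec grep -h '^license' {} + \
          | sort | uniq -c | sort -rn)
echo "${summary}"
rm -rf "${crates}"
```

A real tree would almost certainly show several distinct lines, which is exactly the "combined license in the main recipe" situation described below.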
Some of the recent Rust recipes in various layers have been scanning all the
crates for their licenses and doing a combined license in the main recipe.

> > I'm on the record as really not liking gitsm fetched repos
> > (see the ones I've expanded in meta-virtualization previously),
> > since it makes it very hard to bump the components or have
> > visibility into what is being cloned.
> >
> > I probably won't insist on it, but have you tried breaking
> > this down into the individual clones?
>
> No. Git submodules expose the commits, so I do not understand what you
> mean by "lack of visibility". Judging by the context, you meant at the
> level of the "recipe". Will stick to "gitsm" as it is more of a
> recommendation. But I would like to read your insights on why it makes
> things difficult.

I don't like to pull the main git repository and have the iteration and
chaining to the submodules happen within the fetcher, not visible in the
main recipe. I want all the git:// repositories visible, with commit hashes
that can be tweaked. Even if that means duplicating the information from the
git submodules, it is my preference, as I've been bitten many times when
updating gitsm-based recipes.

> > I think there are some stragglers in the layer, and maybe it
> > is my memory of PV crossing with PKGV (I prefer we use PV), but
> > do we need the ${SRCPV}? The +git normally triggers it to
> > be filled in.
>
> Need to do some digging on this. Will get back to you on this in another
> reply.
>
> > With the hard dependency on wayland, is this always graphical?
>
> There are many cargo feature flags that need a recipe-level
> PACKAGECONFIG mapped to the cargo feature flags. The default
> target builds graphical support, so I left it at that. I do not have
> much clarity on this due to the flag explosion. Maybe follow the initial
> commit with support for granular feature selection?

That's fine with me.

> > Why the native variant?
> Well, it is a guilty pleasure that one day maybe it becomes a viable
> alternative to qemu for testing the created distro, referring to the
> "runqemu" scripts that OE provides. It is an indicator/signal that this
> would be possible, nothing more. It also made it easy to test x86-64
> builds, though not the runtime.

There have been requests for kvmtool for this sort of thing before, and it
is what I did with some of the recent container work. There's no harm in
putting it there for now, but having a unified way to allow runqemu
variation would be required once I've collected up all the options. It just
won't be tested right now, and that's fine. No worries if you leave it.

Bruce

> Thank you, Bruce.
> Looking forward to your reply.
>
> Thank you,
> Keerthivasan Raghavan
-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#9610): https://lists.yoctoproject.org/g/meta-virtualization/message/9610
Mute This Topic: https://lists.yoctoproject.org/mt/117952091/21656
Group Owner: [email protected]
Unsubscribe: https://lists.yoctoproject.org/g/meta-virtualization/unsub [[email protected]]
-=-=-=-=-=-=-=-=-=-=-=-
