On 11/4/25 07:15, Jürgen Groß wrote:
> On 15.10.25 21:57, Val Packett wrote:
>> Starting a virtio backend in a PV domain would panic the kernel in
>> alloc_ioreq, trying to dereference vma->vm_private_data as a pages
>> pointer when in reality it stayed as PRIV_VMA_LOCKED.
>>
>> Fix by allocating a pages array in mmap_resource in the PV case,
>> filling it with page info converted from the pfn array. This allows
>> ioreq to function successfully with a backend provided by a PV dom0.
>>
>> Signed-off-by: Val Packett <[email protected]>
>> ---
>> I've been porting the xen-vhost-frontend[1] to Qubes, which runs on
>> amd64, and we (still) use PV for dom0. The x86 part didn't give me much
>> trouble, but the first thing I found was this crash due to using a PV
>> domain to host the backend: alloc_ioreq was dereferencing the '1'
>> constant and panicking the dom0 kernel.
>>
>> I figured out that I can make a pages array in the expected format from
>> the pfn array where the actual memory mapping happens for the PV case,
>> and with the fix, the ioreq part works: the vhost frontend replies to
>> the probing sequence and the guest recognizes which virtio device is
>> being provided.
>>
>> I still have another thing to debug: the MMIO accesses from the inner
>> driver (e.g. virtio_rng) don't get through to the vhost provider
>> (ioeventfd does not get notified), and manually kicking the eventfd
>> from the frontend seems to crash... Xen itself?? (no Linux panic on
>> console, just a freeze and quick reboot - will try to set up a serial
>> console now)
>
> IMHO, for making the MMIO accesses work you'd need to implement
> ioreq-server support for PV domains in the hypervisor. This will be a
> major endeavor, so before taking your Linux kernel patch I'd like to see
> this covered.
Could Xen return an error instead of crashing?
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
