On Tue, Aug 17, 2021 at 12:38 AM Samuel Thibault <samuel.thiba...@gnu.org> wrote:
> The root pci-arbiter uses libpciaccess' x86 backend to access PCI
On Tue, Aug 17, 2021 at 9:47 PM Joan Lledó <jlle...@mailfence.com> wrote:
> Yes, and the arbiter can play two roles: root arbiter, which uses the x86
> module in libpciaccess; and nested arbiter, which uses the hurd module in
> libpciaccess.
>
> > The hardware devices connected via PCI are available (to the PCI arbiter)
> > as Mach devices
>
> Actually, the devices are available to the arbiter as libpciaccess devices

Thank you both for explaining; this is the part that I was missing: that the
arbiter itself uses libpciaccess, and that it cannot map the devices directly;
it has to map /dev/mem instead.

To me it sounds like libpciaccess should have a Hurd-specific API addition
that would let the user get the memory object backing the mapping created by
device_map_region (). I.e., device_map_region () is a cross-platform API that
maps the device memory into your address space (right?), but on the Hurd there
would also be a way to actually get the memory object it would map (and map it
yourself if you so choose, or do something else with it). It would be the job
of that libpciaccess API to make this object have the right offset and
everything, so that the caller wouldn't have to worry about that.

If I understand this right, the Hurd backend already gets the memory object
with the right offset and size, and would return that directly, while the x86
backend would either have to use the as-of-now unimplemented offset parameter
of device_map (), or create and return an appropriate proxy from that API. In
memory_object_create_proxy (), the kernel would take care of short-circuiting
nested proxy creation, making that a non-issue. This would allow
netfs_get_filemap (VM_PROT_READ) to create another proxy just for enforcing
read-only access, without worrying that the object might already be a proxy.

Does that sound remotely sensible? :) Please keep in mind that while (I think)
I understand Mach VM, I have very little idea about PCI.
I'm (obviously) not in a position to decide what's best for libpciaccess and
the PCI arbiter.

Sergey