On Fri, May 2, 2025 at 6:15 PM Alexei Starovoitov
<[email protected]> wrote:
> remap_pfn_range() should be avoided.
> See big comment in kernel/events/core.c in map_range().
>
> The following seems to work:
Thanks, this helped a lot.
> but this part is puzzling:
> trailing = page_size - (btf_size % page_size) % page_size;
The intention is to calculate how many bytes of trailing zeroes to
expect while accounting for the case where btf_size % page_size == 0.
I could replace this with an explicit check that rounds btf_size up to
the next page boundary:

  end = (btf_size + page_size - 1) / page_size * page_size;
  for (i = btf_size; i < end; i++) ...

Better?
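To make that concrete, here is a minimal sketch of the check as a
standalone helper (userspace C; map, btf_size and page_size are
placeholder names, not the exact variables from the selftest):

  #include <stdbool.h>
  #include <stddef.h>

  /* Return true iff every byte between btf_size and the next page
   * boundary of the mapping is zero. With page_size == 4096 and
   * btf_size == 4097 this checks 4095 bytes; with btf_size == 4096 it
   * checks nothing, which covers the btf_size % page_size == 0 case.
   */
  static bool trailing_is_zeroed(const char *map, size_t btf_size,
                                 size_t page_size)
  {
          size_t end = (btf_size + page_size - 1) / page_size * page_size;
          size_t i;

          for (i = btf_size; i < end; i++)
                  if (map[i] != 0)
                          return false;
          return true;
  }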
In the meantime I've looked at allowing mmap of kmods. I'm not sure
it's worth the effort:
1. Allocations of btf->data in btf_parse_module() would have to use
vmalloc_user() so that allocations are page aligned and zeroed
appropriately (see the sketch after this list). This will be a bit
more expensive on systems with large pages and / or many small kmod
BTFs. We could restrict mmap to BTFs of at least PAGE_SIZE, at the
cost of additional complexity.
2. We need to hold a refcount on struct btf for each mmapped kernel
module, so that btf->data doesn't get freed. Taking the refcount can
happen in the sysfs mmap handler, but dropping it is tricky: kernfs /
sysfs doesn't allow using vm_ops->close (see kernfs_fop_mmap). It
seems possible to use struct kernfs_ops->release(), but I don't
understand how that deals with multiple mmaps of the same file in a
single process. It also makes me wonder what happens when a process
mmaps the kmod BTF, the module is unloaded and the process then
accesses the mapping. My cursory understanding is that this would
raise a fault, which isn't great at all.
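For the first point, the allocation change in btf_parse_module() would
be roughly along these lines (a sketch only, not against the current
tree; the err / errout error path and the data / data_size parameter
names are assumptions):

  /* vmalloc_user() returns a page-aligned, zeroed allocation that can
   * later be handed to userspace via remap_vmalloc_range(), at the
   * cost of rounding every kmod BTF up to whole pages.
   */
  btf->data = vmalloc_user(PAGE_ALIGN(data_size));
  if (!btf->data) {
          err = -ENOMEM;
          goto errout;    /* assumed error label */
  }
  memcpy(btf->data, data, data_size);
  btf->data_size = data_size;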
If nobody objects or has a solution, I'll send a v3 of my original
patch with the review comments addressed, but without support for
mmapping kmods.
Thanks
Lorenz