On 10.03.2026 21:56, Julian Vetter wrote:
> On 3/10/26 17:09, Jan Beulich wrote:
>> On 10.03.2026 16:56, Julian Vetter wrote:
>>> On 3/10/26 16:36, Jan Beulich wrote:
>>>> On 05.03.2026 14:04, Julian Vetter wrote:
>>>>> @@ -45,7 +52,8 @@ struct ioreq_server {
>>>>> /* Lock to serialize toolstack modifications */
>>>>> spinlock_t lock;
>>>>>
>>>>> - struct ioreq_page ioreq;
>>>>> + ioreq_t *ioreq;
>>>>> + gfn_t ioreq_gfn;
>>>>> struct list_head ioreq_vcpu_list;
>>>>> struct ioreq_page bufioreq;
>>>>
>>>> This change in data arrangement should in principle be independent of
>>>> the step of supporting multiple pages. Hence it should be possible to
>>>> separate it out. Problem being that just by looking here and at
>>>> hvm_{,un}map_ioreq_gfn() I can't conclude how you get away without the
>>>> "page" field that struct ioreq_page had. If you can get away without
>>>> it, it's not quite clear why the field exists in the first place. If
>>>> it's not needed, dropping it would be yet another separate, prereq
>>>> change. At which point the remaining pair of fields could continue to
>>>> be used, i.e. the change above then wouldn't be needed; va could be
>>>> renamed if need be, and its type changed.
>>>
>>> Thank you again, Jan, for your feedback! I don't need the page
>>> anymore: when I use vmap(), I don't need to keep track of it, because
>>> during teardown I can recover it via vmap_to_page(). The field is
>>> currently needed because destroy_ring_for_helper() requires the page
>>> in order to release it. But I see now that on x86 the
>>> map_domain_page_global() called from prepare_ring_for_helper()
>>> actually does vmap(&mfn, 1), so that mapping also comes from the vmap
>>> range, and I assume vmap_to_page() could be used for its teardown as
>>> well. However, there is a special case: with NDEBUG defined,
>>> map_domain_page_global() short-circuits to mfn_to_virt() for low MFNs
>>> (putting the VA in the directmap range), bypassing vmap(). In that
>>> case vmap_to_page() would not work. So this would be really messy. I
>>> would rather switch bufioreq to an explicitly vmap()'ed page as well;
>>> then we could remove the page pointer and both cases would be aligned
>>> again.
>>
>> That's an option. Yet are you aware of domain_page_map_to_mfn()? Perhaps
>> that's what you want to switch to using in the patch removing the "page"
>> field. To then, conditionally or uniformly, switch to vmap_to_{mfn,page}()
>> when doing the multi-page work in the subsequent patch.
>
> Yes, thank you. I saw this function, but I was wondering whether it's a
> good idea to wrap the VA in two translation functions, like:
>
> struct page_info *page = mfn_to_page(domain_page_map_to_mfn(va));
There's no fundamental problem with that (we have similar constructs elsewhere,
I think), but ...
> and then calling destroy_ring_for_helper() with it. But I will have a
> look; this way the two cases would again be aligned, so maybe it's the
> cleanest way.
... does destroy_ring_for_helper() actually need to have the page passed in?
It's prepare_ring_for_helper() which calls __map_domain_page_global(), so
destroy_ring_for_helper() could well obtain the MFN / page itself (using
the above construct). VM event and vPL011 code also only ever use the page
pointer supplied by prepare_ring_for_helper() to pass into
destroy_ring_for_helper().
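
To illustrate what I mean (an untested sketch only, assuming the VA was
indeed obtained via __map_domain_page_global(), so that
domain_page_map_to_mfn() can translate it back; note it also drops the
function's current "page" parameter):

void destroy_ring_for_helper(void **_va)
{
    void *va = *_va;

    if ( va )
    {
        /* Re-derive the page from the VA instead of having it passed in. */
        struct page_info *page =
            mfn_to_page(domain_page_map_to_mfn(va));

        unmap_domain_page_global(va);
        put_page_and_type(page);
        *_va = NULL;
    }
}

The VM event and vPL011 callers would then merely drop the extra
argument, and presumably prepare_ring_for_helper() could stop handing
back the page pointer altogether.
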
Jan