On 23.06.22 20:50, Julien Grall wrote:
Hi Oleksandr,
Hello Julien
Sorry for the late reply.
No problem.
On 11/05/2022 19:47, Oleksandr Tyshchenko wrote:
diff --git a/xen/arch/arm/include/asm/mm.h b/xen/arch/arm/include/asm/mm.h
+/*
+ * All accesses to the GFN portion of type_info field should always be
+ * protected by the P2M lock. In case when it is not feasible to satisfy
+ * that requirement (risk of deadlock, lock inversion, etc) it is important
+ * to make sure that all non-protected updates to this field are atomic.
Here you say the non-protected updates should be atomic, but...
[...]
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 7b1f2f4..c94bdaf 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1400,8 +1400,10 @@ void share_xen_page_with_guest(struct page_info *page, struct domain *d,
     spin_lock(&d->page_alloc_lock);

     /* The incremented type count pins as writable or read-only. */
-    page->u.inuse.type_info =
-        (flags == SHARE_ro ? PGT_none : PGT_writable_page) | 1;
+    page->u.inuse.type_info &= ~(PGT_type_mask | PGT_count_mask);
+    page->u.inuse.type_info |= (flags == SHARE_ro ? PGT_none
+                                                  : PGT_writable_page) |
+                               MASK_INSR(1, PGT_count_mask);
... this is not going to be atomic. So I would suggest adding a
comment explaining why this is fine.
Yes, I should have added your explanation from V5 of why this is fine.
So I propose the following text; do you agree with adding it?
/*
 * Please note, the update of the type_info field here is not atomic, as we
 * use a Read-Modify-Write operation on it. But currently this is fine
 * because the caller of page_set_xenheap_gfn() (which is the other place
 * where type_info is updated) would need to acquire a reference on the page.
 * This is only possible after the count_info is updated *and* there is a
 * barrier between the type_info and count_info updates. So there is no
 * immediate need to use cmpxchg() here.
 */
     page_set_owner(page, d);
     smp_wmb(); /* install valid domain ptr before updating refcnt. */
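As a side note, to make the difference discussed above concrete, here is a
minimal standalone sketch (not Xen code): the PGT_* mask values below are
made up for illustration, and C11 atomics stand in for Xen's cmpxchg().

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define PGT_type_mask   0xf000000000000000ULL /* illustrative bit layout */
#define PGT_count_mask  0x0000000000ffffffULL

static _Atomic uint64_t type_info;

/* Plain Read-Modify-Write: safe only if no other updater can race with us. */
static void update_plain(uint64_t type, uint64_t count)
{
    uint64_t v = atomic_load(&type_info);

    v &= ~(PGT_type_mask | PGT_count_mask);
    v |= type | count;
    atomic_store(&type_info, v);
}

/* cmpxchg()-style loop: retries until no other CPU raced with the update. */
static void update_atomic(uint64_t type, uint64_t count)
{
    uint64_t old = atomic_load(&type_info);
    uint64_t new;

    do {
        new = (old & ~(PGT_type_mask | PGT_count_mask)) | type | count;
    } while ( !atomic_compare_exchange_weak(&type_info, &old, new) );
}

int main(void)
{
    update_plain(0x1000000000000000ULL, 1);
    update_atomic(0x2000000000000000ULL, 1);
    printf("type_info = %#llx\n", (unsigned long long)atomic_load(&type_info));
    return 0;
}

The point of the comment above is that the ordering guarantees (reference
acquisition only after count_info is updated, with a barrier in between)
make the cheaper plain RMW sufficient in this particular spot.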
@@ -1505,7 +1507,23 @@ int xenmem_add_to_physmap_one(
     }

     /* Map at new location. */
-    rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
+    if ( !p2m_is_ram(t) || !is_xen_heap_mfn(mfn) )
+        rc = guest_physmap_add_entry(d, gfn, mfn, 0, t);
I would expand the comment above to explain why you need a different
path for xenheap mapped as RAM. AFAICT, this is because we need to
call page_set_xenheap_gfn().
Agreed. I propose the following text; do you agree with it?
/*
 * Map at new location. Here we need to map a xenheap RAM page differently,
 * because we need to store the valid GFN and make sure that nothing was
 * mapped there before (i.e. the stored GFN is still invalid).
 */
+    else
+    {
+        struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+        p2m_write_lock(p2m);
+        if ( gfn_eq(page_get_xenheap_gfn(mfn_to_page(mfn)), INVALID_GFN) )
Sorry to only notice it now. This check will also change the behavior
for XENMAPSPACE_shared_info. Now, we are only allowed to map the
shared info once.
I believe this is fine because AFAICT x86 already prevents it. But
this is probably something that ought to be explained in the already
long commit message.
Agreed. I propose the following text; do you agree with it?
Please note, this patch changes how the shared_info page (which is a
xenheap RAM page) is mapped in xenmem_add_to_physmap_one().
Now we only allow the shared_info page to be mapped once. Subsequent
attempts to map it will result in -EBUSY; if a legitimate use case
shows up, we will be able to relax that behavior.
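To make the new semantics concrete, here is a simplified standalone model
(the struct, field, and helper below are illustrative stand-ins, not the
hypervisor's types; the real patch keeps the GFN in the page's type_info):

#include <stdio.h>

#define INVALID_GFN (~0UL)
#define EBUSY       16   /* stand-in for the errno value */

struct page { unsigned long xenheap_gfn; };

static int map_xenheap_page(struct page *pg, unsigned long gfn)
{
    /* A valid stored GFN means the page is already mapped somewhere. */
    if ( pg->xenheap_gfn != INVALID_GFN )
        return -EBUSY;

    pg->xenheap_gfn = gfn; /* record the now-valid GFN */
    return 0;
}

int main(void)
{
    struct page shared_info = { .xenheap_gfn = INVALID_GFN };

    printf("first map:  %d\n", map_xenheap_page(&shared_info, 0x1000)); /* 0 */
    printf("second map: %d\n", map_xenheap_page(&shared_info, 0x2000)); /* -16 */
    return 0;
}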
My comments are mainly seeking clarifications. The code itself
looks correct to me. I can handle the comments on commit to save you a
round trip once we agree on them.
Thank you, that would be much appreciated.
Cheers,
--
Regards,
Oleksandr Tyshchenko