Hi Jens,

> On 3 Dec 2025, at 11:32, Jens Wiklander <[email protected]> wrote:
>
> Hi,
>
> On Wed, Dec 3, 2025 at 10:36 AM Bertrand Marquis
> <[email protected]> wrote:
>>
>> Hi Jens,
>>
>>> On 3 Dec 2025, at 09:50, Jens Wiklander <[email protected]> wrote:
>>>
>>> Hi Bertrand,
>>>
>>> On Tue, Dec 2, 2025 at 5:50 PM Bertrand Marquis
>>> <[email protected]> wrote:
>>>>
>>>> Hi Jens,
>>>>
>>>>> On 2 Dec 2025, at 15:08, Jens Wiklander <[email protected]> wrote:
>>>>>
>>>>> Hi Bertrand,
>>>>>
>>>>> On Thu, Nov 27, 2025 at 4:52 PM Bertrand Marquis
>>>>> <[email protected]> wrote:
>>>>>>
>>>>>> Rework how Xen accesses the RX/TX buffers shared with the SPMC so that
>>>>>> ownership and locking are handled centrally.
>>>>>>
>>>>>> Move the SPMC RX/TX buffer bases into ffa_rxtx.c as
>>>>>> ffa_spmc_rx/ffa_spmc_tx,
>>>>>> protect them with dedicated ffa_spmc_{rx,tx}_lock spinlocks and expose
>>>>>> ffa_rxtx_spmc_{rx,tx}_{acquire,release}() helpers instead of the global
>>>>>> ffa_rx/ffa_tx pointers and ffa_{rx,tx}_buffer_lock.
>>>>>>
>>>>>> The RX helpers now always issue FFA_RX_RELEASE when we are done
>>>>>> consuming data from the SPMC, so partition-info enumeration and shared
>>>>>> memory paths release the RX buffer on all exit paths. The RX/TX mapping
>>>>>> code is updated to use the descriptor offsets (rx_region_offs and
>>>>>> tx_region_offs) rather than hard-coded structure layout, and to use the
>>>>>> TX acquire/release helpers instead of touching the TX buffer directly.
>>>>>>
>>>>>> Signed-off-by: Bertrand Marquis <[email protected]>
>>>>>> ---
>>>>>>  xen/arch/arm/tee/ffa.c          |  22 +-----
>>>>>>  xen/arch/arm/tee/ffa_partinfo.c |  40 +++++-----
>>>>>>  xen/arch/arm/tee/ffa_private.h  |  18 ++---
>>>>>>  xen/arch/arm/tee/ffa_rxtx.c     | 132 +++++++++++++++++++++++++-------
>>>>>>  xen/arch/arm/tee/ffa_shm.c      |  26 ++++---
>>>
>>> [snip]
>>>
>>>>>>
>>>>>> -void ffa_rxtx_destroy(void)
>>>>>> +void *ffa_rxtx_spmc_rx_acquire(void)
>>>>>> +{
>>>>>> +    ASSERT(!spin_is_locked(&ffa_spmc_rx_lock));
>>>>>
>>>>> Is it invalid for two CPUs to concurrently try to acquire the RX buffer?
>>>>
>>>> No, the RX buffer or the TX buffer with the SPMC should only be used by
>>>> one CPU at a time as there cannot be any concurrent operations using it.
>>>
>>> What if two guests call FFA_PARTITION_INFO_GET concurrently? Both can
>>> succeed in acquiring their RX buffer so they can call
>>> ffa_get_sp_partinfo() concurrently, and the assert might be triggered.
>>> We have a similar problem with FFA_RXTX_MAP_64 and
>>> ffa_rxtx_spmc_tx_acquire(). Contention on the spinlocks for the rx and
>>> tx buffers should be valid. If we can't allow a guest to block here,
>>> we should return an error and let the guest retry.
>>
>> I am not sure I am following anymore.
>> The assert is just there to ensure that the lock is not already taken.
>
> But it could already be taken by another CPU.
>
>> The function is then taking the lock and not releasing it until release
>> is called, which is ensuring that only one vcpu at a time is using the
>> rx buffer. Did I miss something here?
>
> Only one CPU at a time should use the spmc rx buffer, but others might
> try and should be blocked in spin_lock() rather than ASSERT.
>
>>
>> For rxtx map we do call tx_acquire so we lock the buffer.
>>
>> Now we might have a race condition between rxtx_map and unmap
>> where I should take the rx_lock and the tx_lock of the guest to prevent
>> concurrent usage of the vm rxtx buffer. I will fix that one.
>
> Yes, you're right, good catch.
>
>> Please tell me for the spmc rxtx buffers as I am not sure I am following
>> what you mean there :-)
>
> Each guest has its own rxtx buffer, so the spinlock here is just in
> case the guest didn't synchronize its CPUs before calling. But the
> SPMC rxtx buffers are for Xen, so a guest can't be sure that no other
> guest is holding the spinlock.
In fact I just saw why this is wrong. I must not unlock at the end of
spmc_rx/tx_acquire, so that only one user at a time takes the rxtx buffers
shared with the spmc and they become available again only when the current
user is done.

>
> Two guests can independently call FFA_RXTX_MAP_64 and then call
> ffa_rxtx_spmc_tx_acquire() more or less at the same time.
>
> If you remove the "ASSERT(!spin_is_locked(...));" from
> ffa_rxtx_spmc_tx_acquire() and ffa_rxtx_spmc_rx_acquire() it should be
> OK.

My current conclusion is that I need to remove the unlock in acquire, so
that I spin_lock in acquire and spin_unlock in release (a rough sketch of
what I mean is at the end of this mail).

To prevent issues with the vm rxtx buffers as well, I will add tx_acquire
and tx_release functions for the vm tx buffer and return the buffer address
and size to the caller (see the second sketch below). This will prevent any
access to tx or page_count outside of a locked section and remove the
potential race conditions.

I will rework this patch and the next one to end up with a cleaner handling
of the vm and spmc rxtx buffers.

Cheers
Bertrand

>
> Cheers,
> Jens
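
For reference, a minimal sketch of the SPMC-side locking I have in mind
(illustrative only, not the actual patch: the buffer variable, the
FFA_RX_RELEASE call and all error handling are simplified away). The point
is that acquire only takes the lock and release only drops it, so a second
CPU calling acquire simply spins until the current user is done instead of
tripping an ASSERT:

#include <xen/spinlock.h>

/* SPMC RX buffer base, mapped once at init (name as in the series). */
static void *ffa_spmc_rx;
/* Serialises use of the SPMC RX buffer across all CPUs/guests. */
static DEFINE_SPINLOCK(ffa_spmc_rx_lock);

void *ffa_rxtx_spmc_rx_acquire(void)
{
    /* Contention is valid: block here until the buffer is free. */
    spin_lock(&ffa_spmc_rx_lock);

    return ffa_spmc_rx;
}

void ffa_rxtx_spmc_rx_release(void)
{
    /* FFA_RX_RELEASE would be issued to the SPMC here (helper elided). */
    spin_unlock(&ffa_spmc_rx_lock);
}

The TX side would look the same with ffa_spmc_tx and ffa_spmc_tx_lock,
minus the FFA_RX_RELEASE step.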

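And a hypothetical sketch of the planned VM-side helpers, assuming the same
headers as above (the function names, the struct layout and FFA_PAGE_SIZE
are illustrative here, not taken from the series): tx_acquire hands back
the buffer address and size under the lock, so tx and page_count are never
read outside a locked section:

#include <xen/types.h>

/* Illustrative per-guest context; the real struct ffa_ctx differs. */
struct ffa_ctx {
    void *tx;                /* guest TX buffer, set at FFA_RXTX_MAP time */
    unsigned int page_count; /* RX/TX buffer size in FF-A pages */
    spinlock_t tx_lock;      /* serialises use of the guest TX buffer */
};

#define FFA_PAGE_SIZE 4096U  /* FF-A page granule assumed here */

void *ffa_rxtx_vm_tx_acquire(struct ffa_ctx *ctx, uint32_t *size)
{
    spin_lock(&ctx->tx_lock);

    if ( !ctx->tx )
    {
        /* The guest has not mapped its RXTX buffers. */
        spin_unlock(&ctx->tx_lock);
        return NULL;
    }

    *size = ctx->page_count * FFA_PAGE_SIZE;

    return ctx->tx;
}

void ffa_rxtx_vm_tx_release(struct ffa_ctx *ctx)
{
    spin_unlock(&ctx->tx_lock);
}

A caller would then acquire, copy into the returned buffer within the
returned size, and release, mirroring the SPMC helpers above.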