On Wed, Oct 01, 2025 at 01:36:16PM -0700, Richard Henderson wrote:
> On 9/29/25 07:42, Peter Maydell wrote:
> > When a vCPU is created, it typically calls cpu_address_space_init()
> > one or more times to set up its address spaces. We don't currently
> > do anything to destroy these address spaces, which means that we
> > will leak them on a vcpu hot-plug -> hot-unplug cycle.
> > 
> > This patchset fixes the leak by replacing the current
> > cpu_address_space_destroy() (which has an awkward API, includes
> > a bug, and is never called from anywhere) with a new
> > cpu_destroy_address_spaces() which cleans up all the ASes a CPU
> > has and is called from generic unrealize code.
> > 
> > Patch 1 is just a comment improvement to clarify that
> > address_space_destroy() defers most of its work to RCU and you
> > can't free the memory for the AS struct itself until it's done.
> > 
> > Patch 2 is from Peter Xu; we need to be able to do "destroy and
> > free an AS" via RCU, and at the moment you can't do that.
> > 
> > Patch 3 is the bugfix proper.
> 
> So... I wonder if this is really the right direction.
> 
> To be specific:
> 
> Over in split-accel-land we recently had a bit of a kerfuffle over TCG
> registering its MemoryListener with all of the per-cpu address spaces and
> HVF not doing so.  Things get even more complicated when one finds out that
> some MemoryListener callbacks are only used with "global" address spaces and
> some are only used with the per-cpu address spaces.

I only have a very preliminary understanding of this, so please bear with
me if I make silly mistakes...

I was expecting QEMU to provide both the global view (address_space_memory)
and the per-vcpu views.  Then we can register to either of them on demand,
and the global view can serve as the so-called "board model" mentioned below.

Is it not the case?

The root MR is also shared in this case, ensuring that the address space
operations internally share the same flatview.

> 
> On reflection, it seems to me that no address spaces should be owned by the
> cpus.  All address spaces should be owned by the board model, and it should
> be the board model that configures the address spaces used by each cpu.
> 
> If we do this then address spaces, and even the array of address spaces, may
> be created once by the board, shared between all (relevant) cpus, and not
> destroyed by cpu unplug.
> 
> Moreover, in the context of virtualization, there's now exactly one address
> space (since Arm Secure and Tag memory spaces are not required or supported
> except by emulation; I assume the same is true for x86 smm), and the
> aforementioned kerfuffle goes away.  There's no difference between global
> and per-cpu address spaces.
> 
> Anyway, it seems like this provides the same flexibility in the complex
> heterogeneous case and simplifies things for the homogeneous virtualization
> case.

We have another question to ask besides this design discussion: Peter's
series here solidly fixes a memory leak that is easily reproduced with
x86_64 + KVM on cpu hotplug.

Should we still merge it first, considering it doesn't change how we
manage per-vcpu address spaces, but only fixes the leak?  Any bigger
overhaul can then happen on top.

Thanks,

-- 
Peter Xu

