On 01/08/2025 9:24 pm, Stewart Hildebrand wrote:
> In vcpu_create(), after scheduler data is allocated, if
> vmtrace_alloc_buffer() fails, it will jump to the wrong cleanup label,
> resulting in a memory leak.
>
> Move sched_destroy_vcpu and destroy_waitqueue_vcpu to vcpu_teardown.
> Make vcpu_teardown idempotent: deal with NULL unit.
>
> Fix vcpu_runstate_get (called during XEN_SYSCTL_getdomaininfolist post
> vcpu_teardown) when v->sched_unit is NULL.

This is unfortunate.  It feels wrong to be updating stats on a domain
that's in the process of being destroyed, especially as a side effect of
a get call.

But I wonder if we've uncovered an object lifecycle problem here.
Previously, any vCPU which was addressable in the system (i.e. the domid
was addressable and d->vcpu[x] was not NULL) had a unit.
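
If that invariant is gone, then roughly speaking every d->vcpu[] walker
which reaches into the scheduler needs a guard along these lines
(illustrative sketch only, not actual call sites):

    struct vcpu *v;

    for_each_vcpu ( d, v )
    {
        /*
         * Post vcpu_teardown(), the vCPU is still visible here, but its
         * scheduler state has already been freed.
         */
        if ( !v->sched_unit )
            continue;

        /* ... scheduler-facing accesses ... */
    }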

>
> Fixes: 217dd79ee292 ("xen/domain: Add vmtrace_size domain creation parameter")
> Signed-off-by: Stewart Hildebrand <[email protected]>
> ---
> v1->v2:
> * move cleanup functions to vcpu_teardown
> * renamed, was ("xen: fix memory leak on error in vcpu_create")
> ---
>  xen/common/domain.c     | 14 ++++++--------
>  xen/common/sched/core.c |  5 ++++-
>  2 files changed, 10 insertions(+), 9 deletions(-)
>
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 5241a1629eeb..9c65c2974ea3 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -388,6 +388,8 @@ static int vmtrace_alloc_buffer(struct vcpu *v)
>  static int vcpu_teardown(struct vcpu *v)
>  {
>      vmtrace_free_buffer(v);
> +    sched_destroy_vcpu(v);
> +    destroy_waitqueue_vcpu(v);
>  
>      return 0;
>  }
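
(For the "make vcpu_teardown idempotent" part of the commit message, I
assume the NULL tolerance ends up looking something like the sketch
below - the function name is real, the body is purely illustrative.)

    void sched_destroy_vcpu(struct vcpu *v)
    {
        struct sched_unit *unit = v->sched_unit;

        if ( !unit )
            return;        /* Already torn down on an earlier pass. */

        /* ... existing teardown, ending with v->sched_unit = NULL ... */
    }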

Along with the vcpu_teardown() change above, you want a matching:

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 5241a1629eeb..3fd75a6d6784 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1412,8 +1412,6 @@ static void cf_check complete_domain_destroy(struct rcu_head *head)
             continue;
         tasklet_kill(&v->continue_hypercall_tasklet);
         arch_vcpu_destroy(v);
-        sched_destroy_vcpu(v);
-        destroy_waitqueue_vcpu(v);
     }
 
     grant_table_destroy(d);


> @@ -448,13 +450,13 @@ struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
>      }
>  
>      if ( sched_init_vcpu(v) != 0 )
> -        goto fail_wq;
> +        goto fail;
>  
>      if ( vmtrace_alloc_buffer(v) != 0 )
> -        goto fail_wq;
> +        goto fail;
>  
>      if ( arch_vcpu_create(v) != 0 )
> -        goto fail_sched;
> +        goto fail;
>  
>      d->vcpu[vcpu_id] = v;
>      if ( vcpu_id != 0 )
> @@ -472,11 +474,7 @@ struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
>  
>      return v;
>  
> - fail_sched:
> -    sched_destroy_vcpu(v);
> - fail_wq:
> -    destroy_waitqueue_vcpu(v);
> -
> + fail:
>      /* Must not hit a continuation in this context. */
>      if ( vcpu_teardown(v) )
>          ASSERT_UNREACHABLE();
> diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
> index 2ab4313517c3..fb7c99b05360 100644
> --- a/xen/common/sched/core.c
> +++ b/xen/common/sched/core.c
> @@ -321,7 +321,7 @@ void vcpu_runstate_get(const struct vcpu *v,
>       */
>      unit = is_idle_vcpu(v) ? get_sched_res(v->processor)->sched_unit_idle
>                             : v->sched_unit;
> -    lock = likely(v == current) ? NULL : unit_schedule_lock_irq(unit);
> +    lock = likely(v == current || !unit) ? NULL : unit_schedule_lock_irq(unit);

This is obfuscation for obfuscation's sake.  The normal way of writing it
would be:

    if ( v != current && unit )
        lock = unit_schedule_lock_irq(unit);

and that is precisely what the compiler emits.

Moreover, it also matches the pattern for how the lock is released later
in the function.
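
i.e. the resulting shape would be something like (sketch only, assuming
the existing unlock path stays as it is):

    spinlock_t *lock = NULL;

    ...

    if ( v != current && unit )
        lock = unit_schedule_lock_irq(unit);

    /* ... snapshot and adjust the runstate ... */

    if ( unlikely(lock != NULL) )
        unit_schedule_unlock_irq(lock, unit);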

~Andrew
