On Wed, Mar 04, 2026 at 04:48:07PM +0100, Jan Beulich wrote:
> On 04.03.2026 16:38, Roger Pau Monné wrote:
> > On Thu, Feb 26, 2026 at 10:01:45AM +0100, Jan Beulich wrote:
> >> --- a/xen/common/domain.c
> >> +++ b/xen/common/domain.c
> >> @@ -1475,7 +1475,7 @@ static void cf_check complete_domain_des
> >>  {
> >>      struct domain *d = container_of(head, struct domain, rcu);
> >>      struct vcpu *v;
> >> -    int i;
> >> +    unsigned int i;
> >>  
> >>      /*
> >>       * Flush all state for the vCPU previously having run on the current CPU.
> >> @@ -1485,7 +1485,7 @@ static void cf_check complete_domain_des
> >>       */
> >>      sync_local_execstate();
> >>  
> >> -    for ( i = d->max_vcpus - 1; i >= 0; i-- )
> >> +    for ( i = d->max_vcpus; i-- > 0; )
> > 
> > Is there any reason we need to do those loops backwards?
> > 
> > I would rather do:
> > 
> > for ( i = 0; i < d->max_vcpus; i++ )
> > 
> > ?
> 
> I think it's better to keep it like this. The latter of the loops would better
> clear d->vcpu[i] (to not leave a dangling pointer), and there may be code
> which assumes that for ordinary domains d->vcpu[] is populated contiguously.
> Hardly any code should touch the vCPU-s of a domain destructed this far, but
> still better safe than sorry, I guess.

Yes, you are right.  sched_destroy_vcpu() relies on being called in this
specific top-down vCPU order.

Since you are adjusting the code anyway, it might be worth documenting
that some functions (like sched_destroy_vcpu()) expect to be called on
vCPUs in top-down order.

For the change itself:

Acked-by: Roger Pau Monné <[email protected]>

Thanks, Roger.
