On 17.02.2026 16:20, Oleksii Kurochko wrote:
> 
> On 2/17/26 3:59 PM, Jan Beulich wrote:
>> On 13.02.2026 17:28, Oleksii Kurochko wrote:
>>> Lay the groundwork for guest timer support by introducing a per-vCPU
>>> virtual timer backed by Xen’s common timer infrastructure.
>>>
>>> The virtual timer is programmed in response to the guest SBI
>>> sbi_set_timer() call and injects a virtual supervisor timer interrupt
>>> into the vCPU when it expires.
>>>
>>> While a dedicated struct vtimer is not strictly required at present,
>>> it is expected to become necessary once SSTC support is introduced.
>>> In particular, it will need to carry additional state such as whether
>>> SSTC is enabled, the next compare value (e.g. for the VSTIMECMP CSR)
>>> to be saved and restored across context switches, and time delta state
>>> (e.g. HTIMEDELTA) required for use cases such as migration. Introducing
>>> struct vtimer now avoids a later refactoring.
>>>
>>> Signed-off-by: Oleksii Kurochko <[email protected]>
>> Acked-by: Jan Beulich <[email protected]>
>> with a question and a remark.
>>
>>> @@ -126,6 +130,8 @@ int arch_vcpu_create(struct vcpu *v)
>>>   
>>>   void arch_vcpu_destroy(struct vcpu *v)
>>>   {
>>> +    vcpu_timer_destroy(v);
>> It feels pretty late to do this, yet I notice vcpu_teardown() doesn't invoke
>> any per-arch function (yet). There's arch_domain_teardown(), though, which
>> technically could do this for all vCPU-s in a domain.
> 
> Why isn't it enough for this to happen implicitly via domain_destroy()?

It's enough; what I wonder is why it's done so late. Technically anything
that can be torn down in domain_kill() would better be torn down there
already. Just to leave as little baggage as possible to the rest of the
running system. For this particular one: If a domain was killed when some
of its vCPU-s still had timers pending, why would those need keeping on
the respective lists, and why would their handlers still need running
once those timers expire?
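
For illustration, the alternative being suggested could look roughly like the sketch below. This is not compiled against the Xen tree: arch_domain_teardown(), for_each_vcpu() and kill_timer() are existing Xen interfaces, but the v->arch.vtimer field path and the exact shape of the per-vCPU teardown are assumptions based on the patch under review.

```c
/*
 * Illustrative sketch only. Tearing the vtimer down from
 * arch_domain_teardown() (reached via domain_kill()) rather than
 * from arch_vcpu_destroy() (reached via domain_destroy()) would
 * deactivate pending timers as soon as the domain is killed:
 */
int arch_domain_teardown(struct domain *d)
{
    struct vcpu *v;

    for_each_vcpu ( d, v )
        /*
         * kill_timer() deactivates the timer and guarantees its
         * handler is no longer running on any CPU once it returns,
         * so no vtimer handler can fire for the dead domain.
         */
        kill_timer(&v->arch.vtimer.timer);

    return 0;
}
```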

Jan
