On 07.01.25 17:38, Jan Beulich wrote:
> On 07.01.2025 17:07, Jürgen Groß wrote:
>> On 07.01.25 16:34, Jan Beulich wrote:
>>> On 07.01.2025 11:17, Juergen Gross wrote:
>>>> @@ -479,8 +486,13 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
>>>>       */
>>>>      virq = array_index_nospec(virq, ARRAY_SIZE(v->virq_to_evtchn));
>>>> -    if ( virq_is_global(virq) && (vcpu != 0) )
>>>> -        return -EINVAL;
>>>> +    if ( virq_is_global(virq) )
>>>> +    {
>>>> +        if ( get_global_virq_handler(virq) != d )
>>>> +            return -EBUSY;

>>> Hmm. While this eliminates the problem for the common, race-free case,
>>> the handler changing right after the check would still mean the bind
>>> would succeed.

>> Are you fine with me adding a paragraph to the commit message saying
>> that a future patch will handle this case?
>>
>> This future patch is patch 4 of the series, which will need to be
>> modified to check the handling domain inside the event_lock.
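
(Purely as illustration, a minimal stand-alone sketch of the
check-inside-the-lock idea in plain C, with a pthread mutex standing in
for the event_lock and all names invented here; this is not the series'
actual code:)

/*
 * Sketch only, not Xen code: a pthread mutex stands in for the
 * event_lock.  The handler check and the bind happen under one lock,
 * so a concurrent handler change cannot invalidate the check.
 */
#include <errno.h>
#include <pthread.h>

struct domain { int id; };

static pthread_mutex_t event_lock = PTHREAD_MUTEX_INITIALIZER;
static struct domain *global_virq_handler; /* protected by event_lock */
static struct domain *bound_domain;        /* protected by event_lock */

static int bind_global_virq(struct domain *d)
{
    int rc = 0;

    pthread_mutex_lock(&event_lock);
    if ( global_virq_handler != d )
        rc = -EBUSY;        /* handler changed, or never was d */
    else
        bound_domain = d;   /* the bind is done under the same lock */
    pthread_mutex_unlock(&event_lock);

    return rc;
}

static void set_global_virq_handler(struct domain *d)
{
    pthread_mutex_lock(&event_lock); /* same lock as the bind path */
    global_virq_handler = d;
    pthread_mutex_unlock(&event_lock);
}

The point is simply that a handler change has to take the same lock, so
it can no longer slip in between the check and the bind.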

> I think this would be okay, so long as patches 2...4 are then also all
> committed together.

>>> Plus this way you're breaking a case that afaict has been working so
>>> far: the bind happening before the setting of the handler. With a lot
>>> of unrelated ifs and whens this could e.g. be of interest when
>>> considering a re-startable Xenstore domain. The one to take over could
>>> start first, obtain state from the original one while that's still
>>> active, and be nominated the handler of the global vIRQ only at the
>>> last moment.

>> This is a racy situation, too. If the old domain receives the virq after
>> sending the state, this would need to be handled by transferring the virq
>> information to the new domain, which can result in a never-ending story.
>>
>> This is the reason why the domain state bitmap is reset to flag all
>> existing domains as "changed", as otherwise a change might get lost.
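
(Again only a stand-alone sketch of the idea, with invented names; the
series' actual bitmap handling may well look different. On a handler
change, every existing domain is re-flagged as "changed", so nothing
that raced the handover can be missed:)

/* Sketch only: re-flag all existing domains as "changed". */
#include <limits.h>
#include <string.h>

#define MAX_DOMAINS   1024
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

static unsigned long dom_changed[MAX_DOMAINS / BITS_PER_LONG];
static _Bool dom_exists[MAX_DOMAINS];

static void reset_changed_bitmap(void)
{
    unsigned int d;

    memset(dom_changed, 0, sizeof(dom_changed));
    for ( d = 0; d < MAX_DOMAINS; d++ )
        if ( dom_exists[d] )
            dom_changed[d / BITS_PER_LONG] |= 1UL << (d % BITS_PER_LONG);
}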

>> I'd rather be able to handle today's use cases in a sane way than try to
>> handle weird future use cases we don't know about yet.
>>
>> I think today's behavior is more or less insane, while the new behavior is
>> much easier to understand and more intuitive.

> Hmm, I'd like to leave this then for input from other maintainers.

Just one additional remark on your re-startable Xenstore domain scenario
above:

It wouldn't be possible today to do the same with a xenstore daemon in
e.g. dom0, as binding the virq a second time from within the same domain
would be rejected by the hypervisor. In the xenstore domain case you'd
either need the old domain to ask dom0 to change the handler (so much
for needing less communication), or you'd need to give the xenstore
domain the right to change the handler itself, which would require using
Flask or modifying the dummy XSM rights of the xenstore domain.
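
For reference, the per-vCPU check in evtchn_bind_virq() that rejects
such a second bind looks roughly like this (paraphrased from memory,
not a verbatim quote of current staging):

    /* A vCPU that already has the vIRQ bound gets -EEXIST. */
    if ( read_atomic(&v->virq_to_evtchn[virq]) )
        ERROR_EXIT(-EEXIST);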


Juergen
