Fabiano Rosas <faro...@suse.de> wrote:
> Juan Quintela <quint...@redhat.com> writes:
>
>> Fabiano Rosas <faro...@suse.de> wrote:
>>> The channels_ready semaphore is a global variable not linked to any
>>> single multifd channel. Waiting on it only means that "some" channel
>>> has become ready to send data. Since we need to address the channels
>>> by index (multifd_send_state->params[i]), that information adds
>>> nothing of value.
>>
>> NAK.
>>
>> I disagree here O:-)
>>
>> the reason why that semaphore exists is multifd_send_pages()
>>
>> Simplified, what the function does is:
>>
>> sem_wait(channels_ready);
>>
>> for_each_channel()
>>    look if it is empty()
>>
>> But with the semaphore, we guarantee that when we get to the loop,
>> there is a channel ready, so we know we don't busy wait searching for
>> a channel that is free.
>>
>
> Ok, so that clarifies the channels_ready usage.
>
> Now, thinking out loud... can't we simply (famous last words) remove the
> "if (!p->pending_job)" line and let multifd_send_pages() prepare another
> payload for the channel? That way multifd_send_pages() could already
> return and the channel would see one more pending_job and proceed to
> send it.

No.

See the while loop in multifd_send_thread()

    while (true) {
        qemu_mutex_lock(&p->mutex);

        if (p->pending_job) {

            ......
            Do things with parts of the struct that are shared with the
            migration thread
            ....
            qemu_mutex_unlock(&p->mutex);

            // Lock dropped.
            // Do the actual work on the channel; pending_job set means
            // that it is busy.
            // With the mutex dropped, the migration thread can use the
            // shared variables, but must not touch the channel.

            // Now we decrease pending_job, so the main thread can
            // change things as it wants.
            // But we need to take the lock first.
            qemu_mutex_lock(&p->mutex);
            p->pending_job--;
            qemu_mutex_unlock(&p->mutex);
            ......
        }
    }

This is a common concurrency pattern.  To avoid holding your mutex for
too long, you use a variable (that can only be tested/changed while
holding the lock) to signal that the "channel" is busy even though the
struct the lock protects is not (see how we make sure that the channel
never uses any variable of the struct without holding the lock).
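
For completeness, the other side looks roughly like this (a simplified
sketch of multifd_send_pages() from memory, not the literal code):

    qemu_sem_wait(&multifd_send_state->channels_ready);

    for (i = next_channel;; i = (i + 1) % migrate_multifd_channels()) {
        p = &multifd_send_state->params[i];

        qemu_mutex_lock(&p->mutex);
        if (!p->pending_job) {
            // idle channel found; the sem guarantees there is at
            // least one, so this loop never spins forever
            p->pending_job++;
            break;
        }
        qemu_mutex_unlock(&p->mutex);
    }

    // still holding p->mutex: hand over the shared state
    p->pages = pages;
    qemu_mutex_unlock(&p->mutex);
    qemu_sem_post(&p->sem);    // wake up the channel thread

If we let this run while pending_job is already set, it would overwrite
p->pages while the channel may still be reading it.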


> Or, since there's no resending anyway, we could dec pending_jobs earlier
> before unlocking the channel. It seems the channel could be made ready
> for another job as soon as the packet is built and the lock is released.

pending_job could be turned into a bool.  We just need to make sure
that we don't break the _sync() case.
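
Something like this (untested sketch):

    // hypothetical: pending_job as a flag instead of a counter
    bool pending_job;

    // channel thread, after finishing the send
    qemu_mutex_lock(&p->mutex);
    p->pending_job = false;
    qemu_mutex_unlock(&p->mutex);

From memory, multifd_send_sync_main() also increments pending_job to
queue the SYNC packet, so with a bool we would have to guarantee that a
sync request never overlaps a normal job.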

> That way we could remove the semaphore and let the mutex do the job of
> waiting for the channel to become ready.

As I said, we don't want that, because channels can run at different
speeds due to factors outside of our control.

If the semaphore bothers you, you can change it to a condition
variable, but that just moves the complexity from one side to the
other.  (The initial implementation had a condition variable, but Paolo
said that the semaphore is more efficient, so he won.)
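
For illustration, the condition variable version of the wait would look
something like this (hypothetical names, not the actual code):

    // waiter (migration thread)
    qemu_mutex_lock(&ready_lock);
    while (ready_count == 0) {
        qemu_cond_wait(&ready_cond, &ready_lock);
    }
    ready_count--;
    qemu_mutex_unlock(&ready_lock);

    // poster (channel thread)
    qemu_mutex_lock(&ready_lock);
    ready_count++;
    qemu_cond_signal(&ready_cond);
    qemu_mutex_unlock(&ready_lock);

The semaphore does all of that in a single qemu_sem_wait() /
qemu_sem_post() pair, without an extra lock in the fast path.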

>> Notice that I fully agree that the sem is not needed for locking.
>> Locking is done with the mutex.  It is just used to make sure that we
>> don't busy loop on that loop.
>>
>> And we use a sem, because it is the easiest way to know how many
>> channels are ready (even when we only care if there is one when we
>> arrive to that code).
>
> Yep, that's fine, no objection here.
>
>>
>> We lost track of that counter, and we fixed that here:
>
> Kind of, because we still don't wait on it during cleanup. If we did,
> then we could have an assert at the end to make sure this doesn't
> regress again.
>
> And maybe use channels_ready.count for other types of introspection.

We could.
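
An untested sketch of what that cleanup check could look like (the
exact accounting would need auditing first):

    // drain channels_ready at cleanup; if the posts and waits are
    // balanced, this must never block
    for (i = 0; i < migrate_multifd_channels(); i++) {
        qemu_sem_wait(&multifd_send_state->channels_ready);
    }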

>> commit d2026ee117147893f8d80f060cede6d872ecbd7f
>> Author: Juan Quintela <quint...@redhat.com>
>> Date:   Wed Apr 26 12:20:36 2023 +0200
>>
>>     multifd: Fix the number of channels ready
>>
>>     We don't wait in the sem when we are doing a sync_main.  Make it
>>
>> And we were addressing the problem that some users were finding that
>> we were busy waiting on that loop.
>>
>>> The channel being addressed is not necessarily the
>>> one that just released the semaphore.
>>
>> We only care that there is at least one free.  We are going to
>> search for the next free one.
>>
>> Does this explanation make sense?
>
> It does, thanks for taking the time to educate us =)
>
> I made some suggestions above, but I might be missing something still.

I think that the current code is already quite efficient.

But I will have to think about whether that improves anything major.

Later, Juan.

