On Wed, Oct 05, 2022 at 12:18:00PM +0100, Dr. David Alan Gilbert wrote:
> * Peter Xu ([email protected]) wrote:
> > On Tue, Oct 04, 2022 at 02:55:10PM +0100, Dr. David Alan Gilbert wrote:
> > > * Peter Xu ([email protected]) wrote:
> > > > Don't take the bitmap mutex when sending pages, or when being
> > > > throttled by migration_rate_limit() (which is a bit tricky to call
> > > > here in ram code, but seems still helpful).
> > > >
> > > > It prepares for the possibility of concurrently sending pages in >1
> > > > threads using the function ram_save_host_page(), because all threads
> > > > may need the bitmap_mutex to operate on the bitmaps.  This way,
> > > > neither sendmsg() nor any kind of qemu_sem_wait() blocking one thread
> > > > will block the others from progressing.
> > > >
> > > > Signed-off-by: Peter Xu <[email protected]>
> > >
> > > I generally don't like taking locks conditionally; but this kind of
> > > looks OK.  I think it needs a big comment at the start of the function
> > > saying that it's called and left with the lock held, but that it might
> > > drop it temporarily.
> >
> > Right, the code is slightly hard to read; I just haven't found a good
> > and easy solution for it yet.  It's just that we may still want to keep
> > the lock as long as possible for precopy in one shot.
> >
> > >
> > > > ---
> > > > migration/ram.c | 42 +++++++++++++++++++++++++++++++-----------
> > > > 1 file changed, 31 insertions(+), 11 deletions(-)
> > > >
> > > > diff --git a/migration/ram.c b/migration/ram.c
> > > > index 8303252b6d..6e7de6087a 100644
> > > > --- a/migration/ram.c
> > > > +++ b/migration/ram.c
> > > > @@ -2463,6 +2463,7 @@ static void postcopy_preempt_reset_channel(RAMState *rs)
> > > > */
> > > >  static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
> > > >  {
> > > > +    bool page_dirty, release_lock = postcopy_preempt_active();
> > >
> > > Could you rename that to something like 'drop_lock' - you are taking the
> > > lock at the end even when you have 'release_lock' set - which is a bit
> > > strange naming.
> >
> > Is there any difference between "drop" and "release"?  I'll change the
> > name anyway since I definitely trust you on any English comments, but
> > please still let me know - I'd love to learn more about these! :)
>
> I'm not sure 'drop' is much better either; I was struggling to find a
> good name.
I can also call it "preempt_enabled".

Actually, I could replace it with calling postcopy_preempt_active()
directly everywhere, but I want to make it crystal clear that the value
does not change within the function, so that lock & unlock are always
paired.  In our case I believe the state doesn't change, but the local
variable makes it 100% sure there can be no bug (e.g. a deadlock) caused
by the state flipping between the unlock and the re-lock.
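
I.e., the shape is roughly like below (illustrative sketch, not the exact
patch; the mutex itself is taken by the caller):

    bool release_lock = postcopy_preempt_active();  /* sampled only once */

    /* entered with bitmap_mutex held */
    if (release_lock) {             /* this value decides the unlock... */
        qemu_mutex_unlock(&rs->bitmap_mutex);
    }
    /* ... send the page, possibly blocking in sendmsg() or rate limit ... */
    if (release_lock) {             /* ...and the matching re-lock */
        qemu_mutex_lock(&rs->bitmap_mutex);
    }
    /* left with bitmap_mutex held again */

If we called postcopy_preempt_active() at both sites instead and the state
flipped in between, we could unlock without re-locking (or the reverse).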
>
> > >
> > > >      int tmppages, pages = 0;
> > > >      size_t pagesize_bits = qemu_ram_pagesize(pss->block) >> TARGET_PAGE_BITS;
> > > > @@ -2486,22 +2487,41 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss)
> > > >              break;
> > > >          }
> > > >
> > > > +        page_dirty = migration_bitmap_clear_dirty(rs, pss->block, pss->page);
> > > > +        /*
> > > > +         * Properly yield the lock only in postcopy preempt mode because
> > > > +         * both migration thread and rp-return thread can operate on the
> > > > +         * bitmaps.
> > > > +         */
> > > > +        if (release_lock) {
> > > > +            qemu_mutex_unlock(&rs->bitmap_mutex);
> > > > +        }
> > >
> > > Shouldn't the unlock/lock move inside the 'if (page_dirty) {' ?
> >
> > I think we can move it inside, but it may not be as optimal as keeping
> > it as-is.
> >
> > Consider a case where we've got the bitmap with continuous zero bits.
> > During postcopy, the migration thread could be spinning here with the
> > lock held even if it doesn't send a thing.  It could still block the
> > return path thread from sending urgent pages which may be outside the
> > zero zones.
>
> OK, that reason needs commenting then - you're going to do a lot of
> release/take pairs in that case, which is going to show up as very hot;
> so hmm, if it was just for that type of 'yield' behaviour you wouldn't
> normally do it for each bit.
Hold on.. I think my assumed case won't easily trigger, because at the end
of the loop we look for the next "dirty" page.  So continuously clean pages
are unlikely - I'd even say impossible - because we're holding the mutex
during both the scan and the clear-dirty, so no one else can flip a bit.
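
To be explicit, the loop is shaped roughly like below (simplified sketch;
the real code is in ram_save_host_page() and the loop condition is more
involved):

    /* entered with bitmap_mutex held */
    do {
        page_dirty = migration_bitmap_clear_dirty(rs, pss->block, pss->page);
        if (release_lock) {
            qemu_mutex_unlock(&rs->bitmap_mutex);
        }
        /* ... send the page if page_dirty ... */
        if (release_lock) {
            qemu_mutex_lock(&rs->bitmap_mutex);
        }
        /* the scan for the next dirty page also runs under the mutex */
        pss->page = migration_bitmap_find_dirty(rs, pss->block, pss->page);
    } while (/* pss->page still within the host page */);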
So yeah, I think it's okay to move it inside "if (page_dirty)", but since
we'll almost always take the dirty path, maybe it won't help much either,
because it'll be mostly the same as keeping it outside?
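
For clarity, moving it inside would look roughly like this (untested
sketch):

    page_dirty = migration_bitmap_clear_dirty(rs, pss->block, pss->page);
    if (page_dirty) {
        if (release_lock) {
            qemu_mutex_unlock(&rs->bitmap_mutex);
        }
        tmppages = ram_save_target_page(rs, pss);
        if (release_lock) {
            qemu_mutex_lock(&rs->bitmap_mutex);
        }
    } else {
        tmppages = 0;
    }

That would save the unlock/lock pair only for the (rare, per the above)
clean-page case.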
--
Peter Xu