On Tue, Feb 03, 2026 at 10:18:22AM +0100, Lukas Straub wrote:
> On Mon, 2 Feb 2026 09:26:06 -0500
> Peter Xu <[email protected]> wrote:
> 
> > On Fri, Jan 30, 2026 at 11:24:02AM +0100, Lukas Straub wrote:
> > > On Tue, 27 Jan 2026 15:49:31 -0500
> > > Peter Xu <[email protected]> wrote:
> > >   
> > > > On Sun, Jan 25, 2026 at 09:40:11PM +0100, Lukas Straub wrote:  
> > > > > +void migration_test_add_colo(MigrationTestEnv *env)
> > > > > +{
> > > > > +    if (!env->has_kvm) {
> > > > > +        g_test_skip("COLO requires KVM accelerator");
> > > > > +        return;
> > > > > +    }    
> > > > 
> > > > I'm OK if you want to explicitly bypass others, but could you explain
> > > > why?
> > > > 
> > > > Thanks,
> > > >   
> > > 
> > > It used to hang with TCG. Now it crashes, since
> > > migration_bitmap_sync_precopy assumes bql is held. Something for later.  
> > 
> > If we want to keep COLO around and be serious about it, let's try to hold
> > COLO to the same standard we target for migration in general whenever
> > possible.  We shouldn't randomly work around bugs.  We should fix them.
> > 
> > It looks to me there's some locking issue instead.
> > 
> > Iterator's complete() requires BQL.  Would a patch like below make sense
> > to you?
> > 
> > diff --git a/migration/colo.c b/migration/colo.c
> > index db783f6fa7..b3ea137120 100644
> > --- a/migration/colo.c
> > +++ b/migration/colo.c
> > @@ -458,8 +458,8 @@ static int colo_do_checkpoint_transaction(MigrationState *s,
> >      /* Note: device state is saved into buffer */
> >      ret = qemu_save_device_state(fb);
> >  
> > -    bql_unlock();
> >      if (ret < 0) {
> > +        bql_unlock();
> >          goto out;
> >      }
> >  
> > @@ -473,6 +473,9 @@ static int colo_do_checkpoint_transaction(MigrationState *s,
> >       */
> >      qemu_savevm_live_state(s->to_dst_file);
> >  
> > +    /* Saving the live state above requires BQL; only drop it now */
> > +    bql_unlock();
> > +
> >      qemu_fflush(fb);
> >  
> >      /*
> 
> I already tested that and it works.  However, we have to be very careful
> with the locking here, and I don't think it is safe to take the bql on
> the primary:
> 
> The secondary has the bql held at this point:

This is definitely an interesting piece of code... one question:

> 
>     colo_receive_check_message(mis->from_src_file,
>                        COLO_MESSAGE_VMSTATE_SEND, &local_err);
>     ...
>     bql_lock();
>     cpu_synchronize_all_states();

Why is this needed at all? ^^^^^^^^^^^^^^^

The qemu_loadvm_state_main() line right below should only load RAM.  I
don't see how it has anything to do with CPU register states..

>     ret = qemu_loadvm_state_main(mis->from_src_file, mis, errp);
>     bql_unlock();
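
If cpu_synchronize_all_states() really isn't needed for a RAM-only load,
I'd naively expect the critical section to shrink to something like this
(untested sketch, just to illustrate what I mean):

    bql_lock();
    ret = qemu_loadvm_state_main(mis->from_src_file, mis, errp);
    bql_unlock();
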
> 
> On the primary there is a filter-mirror mirroring incoming packets to
> the secondary's filter-redirector.  However, since the secondary's
> migration thread holds the bql, the receiving filter is blocked and will
> not receive anything from the socket.  Thus the filter-mirror on the
> primary may also get blocked during send and block the main loop (it
> uses blocking IO).

Hmm... could you explain why a blocking IO operation that mirrors some
packets requires holding the BQL?  This sounds wrong on its own.

> 
> Now if the primary migration thread wants to take the bql it will
> deadlock.
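
Let me restate the chain, just to make sure I follow it (this is my
reconstruction from your description, so correct me where I'm wrong):

    /*
     * Secondary migration thread: holds the BQL across
     *     qemu_loadvm_state_main(), so the secondary's filter-redirector
     *     never drains its socket.
     * Primary main loop:          holds the BQL, with filter-mirror stuck
     *     in a blocking send to that undrained socket.
     * Primary migration thread:   wants bql_lock(), so it waits forever
     *     on the primary main loop -> deadlock.
     */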
> 
> So I think this is something to fix in a separate series since it is
> more involved.

Yes, it might be involved, but this is really not a case of "let's make
it simple for now and improve it later".  This is a case of "OK, this
function _requires_ this lock, but let's not take the lock and leave it
for later".  That's not something we can put aside, afaiu.  We should
really fix it.

How involved do you think a proper fix would be?  Could you explain the
problem in more detail?

It might be helpful if you can reproduce the hang and attach the logs
from both QEMU instances, along with a full thread backtrace dump from
each (e.g. "thread apply all bt" in gdb).  I'll see what I can do to
help.

Thanks,

-- 
Peter Xu

