On Mon, 2 Feb 2026 09:26:06 -0500
Peter Xu <[email protected]> wrote:

> On Fri, Jan 30, 2026 at 11:24:02AM +0100, Lukas Straub wrote:
> > On Tue, 27 Jan 2026 15:49:31 -0500
> > Peter Xu <[email protected]> wrote:
> >   
> > > On Sun, Jan 25, 2026 at 09:40:11PM +0100, Lukas Straub wrote:  
> > > > +void migration_test_add_colo(MigrationTestEnv *env)
> > > > +{
> > > > +    if (!env->has_kvm) {
> > > > +        g_test_skip("COLO requires KVM accelerator");
> > > > +        return;
> > > > +    }    
> > > 
> > > I'm OK if you want to explicitly bypass others, but could you explain
> > > why?
> > > 
> > > Thanks,
> > >   
> > 
> > It used to hang with TCG. Now it crashes, since
> > migration_bitmap_sync_precopy assumes the BQL is held. Something to fix later.  
> 
> If we want to keep COLO around and be serious, let's try to hold COLO to
> the same standard we target for migration in general whenever possible.  We
> shouldn't randomly work around bugs.  We should fix them.
> 
> It looks to me there's some locking issue instead.
> 
> The iterator's complete() hook requires the BQL.  Would a patch like the
> one below make sense to you?
> 
> diff --git a/migration/colo.c b/migration/colo.c
> index db783f6fa7..b3ea137120 100644
> --- a/migration/colo.c
> +++ b/migration/colo.c
> @@ -458,8 +458,8 @@ static int colo_do_checkpoint_transaction(MigrationState *s,
>      /* Note: device state is saved into buffer */
>      ret = qemu_save_device_state(fb);
>  
> -    bql_unlock();
>      if (ret < 0) {
> +        bql_unlock();
>          goto out;
>      }
>  
> @@ -473,6 +473,9 @@ static int colo_do_checkpoint_transaction(MigrationState *s,
>       */
>      qemu_savevm_live_state(s->to_dst_file);
>  
> +    /* Saving the live state above requires the BQL; safe to drop it now */
> +    bql_unlock();
> +
>      qemu_fflush(fb);
>  
>      /*

I already tested that and it works. However, we have to be very careful
around the locking here, and I don't think it is safe to take the BQL on
the primary at this point:

The secondary has the bql held at this point:

    colo_receive_check_message(mis->from_src_file,
                       COLO_MESSAGE_VMSTATE_SEND, &local_err);
    ...
    bql_lock();
    cpu_synchronize_all_states();
    ret = qemu_loadvm_state_main(mis->from_src_file, mis, errp);
    bql_unlock();

On the primary there is a filter-mirror mirroring incoming packets to
the filter-redirector on the secondary. However, since the secondary's
migration code holds the BQL, the receiving filter is blocked and will
not read anything from the socket. Thus the filter-mirror on the
primary may also block during send and stall the main loop (it uses
blocking I/O).

Now if the primary migration thread wants to take the BQL, it will
deadlock.

So I think this is something to fix in a separate series since it is
more involved.

Regards,
Lukas Straub

> 
> > 
> > #6  0x00007ffff7471517 in __assert_fail (assertion=assertion@entry=0x555555f17aee "bql_locked() != locked", file=file@entry=0x555555f17ab0 "../system/cpus.c", line=line@entry=535, function=function@entry=0x55555609bfd0 <__PRETTY_FUNCTION__.9> "bql_update_status") at ./assert/assert.c:105
> > #7  0x0000555555b09f1e in bql_update_status (locked=locked@entry=false) at ../system/cpus.c:535
> > #8  0x0000555555ec60e7 in qemu_mutex_pre_unlock (mutex=0x555557166700 <bql>, file=0x555555efe1dc "../cpu-common.c", line=164) at ../util/qemu-thread-common.h:57
> > #9  qemu_mutex_pre_unlock (line=164, file=0x555555efe1dc "../cpu-common.c", mutex=0x555557166700 <bql>) at ../util/qemu-thread-common.h:48
> > #10 qemu_cond_wait_impl (cond=0x5555571442c0 <qemu_work_cond>, mutex=0x555557166700 <bql>, file=0x555555efe1dc "../cpu-common.c", line=164) at ../util/qemu-thread-posix.c:224
> > #11 0x000055555589e6c8 in do_run_on_cpu (cpu=<optimized out>, func=<optimized out>, data=..., mutex=0x555557166700 <bql>) at ../cpu-common.c:164
> > #12 0x0000555555b17a06 in memory_global_after_dirty_log_sync () at ../system/memory.c:2938
> > #13 0x0000555555b55b47 in migration_bitmap_sync (rs=0x7fffe8001340, last_stage=last_stage@entry=true) at ../migration/ram.c:1157
> > #14 0x0000555555b56721 in migration_bitmap_sync_precopy (last_stage=last_stage@entry=true) at ../migration/ram.c:1195
> > #15 0x0000555555b59f8a in ram_save_complete (f=0x5555575db620, opaque=<optimized out>) at ../migration/ram.c:3381
> > #16 0x0000555555b5e4f5 in qemu_savevm_complete (se=se@entry=0x5555574c0d80, f=f@entry=0x5555575db620) at ../migration/savevm.c:1521
> > #17 0x0000555555b60437 in qemu_savevm_state_complete_precopy_iterable (f=f@entry=0x5555575db620, in_postcopy=in_postcopy@entry=false) at ../migration/savevm.c:1627
> > #18 0x0000555555b60a4f in qemu_savevm_state_complete_precopy (iterable_only=true, f=0x5555575db620) at ../migration/savevm.c:1719
> > #19 qemu_savevm_live_state (f=0x5555575db620) at ../migration/savevm.c:1855
> > #20 0x0000555555b65ed9 in colo_do_checkpoint_transaction (fb=<optimized out>, bioc=<optimized out>, s=0x5555574c0070) at ../migration/colo.c:474
> > #21 colo_process_checkpoint (s=0x5555574c0070) at ../migration/colo.c:592
> > #22 migrate_start_colo_process (s=0x5555574c0070) at ../migration/colo.c:655
> > #23 0x0000555555b4971e in migration_iteration_finish (s=0x5555574c0070) at ../migration/migration.c:3297
> > #24 migration_thread (opaque=opaque@entry=0x5555574c0070) at ../migration/migration.c:3584
> > #25 0x0000555555ec58c0 in qemu_thread_start (args=0x5555576583e0) at ../util/qemu-thread-posix.c:393
> > #26 0x00007ffff74d2aa4 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:447
> > #27 0x00007ffff755fc6c in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:78
> 
> 
> 
