* Peter Maydell ([email protected]) wrote:
> abort()/assert() don't print a backtrace.
>
> I added some OSX backtrace-gathering/printing functions to the error path,
> and got this:
>
> 0   qemu-system-x86_64        0x000000010c66d203  qemu_mutex_lock + 83
> 1   qemu-system-x86_64        0x000000010c2ac7af  unqueue_page + 47
OK, it is the mutex that I added in that patch.
> 2   qemu-system-x86_64        0x000000010c2ac386  get_queued_page + 54
> 3   qemu-system-x86_64        0x000000010c2ac135  ram_find_and_save_block + 165
> 4   qemu-system-x86_64        0x000000010c2ab5a2  ram_save_iterate + 130
> 5   qemu-system-x86_64        0x000000010c2afa2e  qemu_savevm_state_iterate + 302
> 6   qemu-system-x86_64        0x000000010c53acbb  migration_thread + 571
> 7   libsystem_pthread.dylib   0x00007fff9146c05a  _pthread_body + 131
> 8   libsystem_pthread.dylib   0x00007fff9146bfd7  _pthread_body + 0
> 9   libsystem_pthread.dylib   0x00007fff914693ed  thread_start + 13
>
>
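
(For reference, something along the following lines is enough to get that
style of backtrace on OSX via <execinfo.h>; this is only a sketch, and the
function name and where it gets hooked into the error path are assumptions
rather than Peter's actual patch.)

/* Minimal sketch of OSX backtrace printing; backtrace_symbols_fd() emits
 * "<frame> <image> <address> <symbol> + <offset>" lines like those above. */
#include <execinfo.h>
#include <unistd.h>

static void dump_backtrace(void)
{
    void *frames[64];
    int nframes = backtrace(frames, 64);

    /* Symbolise and write directly to stderr; unlike backtrace_symbols()
     * this does not allocate, so it is safer in an abort/assert path. */
    backtrace_symbols_fd(frames, nframes, STDERR_FILENO);
}
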
> >
> > Could you also add:
> >
> > diff --git a/migration/migration.c b/migration/migration.c
> > index 9bd2ce7..85e5766 100644
> > --- a/migration/migration.c
> > +++ b/migration/migration.c
> > @@ -93,6 +93,7 @@ MigrationState *migrate_get_current(void)
> > };
> >
> > if (!once) {
> > +    fprintf(stderr,"migrate_get_current do init of current_migration %d\n", getpid());
> >      qemu_mutex_init(&current_migration.src_page_req_mutex);
> > once = true;
> > }
> > diff --git a/migration/ram.c b/migration/ram.c
> > index 4266687..72b46f2 100644
> > --- a/migration/ram.c
> > +++ b/migration/ram.c
> > @@ -1036,6 +1036,7 @@ static RAMBlock *unqueue_page(MigrationState *ms, ram_addr_t *offset,
> > {
> > RAMBlock *block = NULL;
> >
> > + fprintf(stderr,"unqueue_page %d\n", getpid());
> > qemu_mutex_lock(&ms->src_page_req_mutex);
> > if (!QSIMPLEQ_EMPTY(&ms->src_page_requests)) {
> > struct MigrationSrcPageRequest *entry =
> >
> >
> > and make sure that the init happens before the first unqueue (you'll get
> > loads of calls to unqueue).
>
> With that change, plus the backtracing:
>
> /x86_64/ahci/flush/retry: OK
> /x86_64/ahci/flush/migrate: migrate_get_current do init of current_migration 60427
> migrate_get_current do init of current_migration 60428
> unqueue_page 60427
> 0   qemu-system-x86_64        0x0000000101a751c3  qemu_mutex_lock + 83
> 1   qemu-system-x86_64        0x00000001016b4749  unqueue_page + 89
OK, so the lock I added is apparently being initialised before it's being
locked, so that's good.

Can you try a simple migration by hand, outside of the test harness; just
something simple like:

  ./bin/qemu-system-x86_64 -M pc -nographic
  (qemu) migrate "exec: cat > /dev/null"

and the same with -M q35?
Dave
> 2   qemu-system-x86_64        0x00000001016b42f6  get_queued_page + 54
> 3   qemu-system-x86_64        0x00000001016b40a5  ram_find_and_save_block + 165
> 4   qemu-system-x86_64        0x00000001016b3512  ram_save_iterate + 130
> 5   qemu-system-x86_64        0x00000001016b79be  qemu_savevm_state_iterate + 302
> 6   qemu-system-x86_64        0x0000000101942c7b  migration_thread + 571
> 7   libsystem_pthread.dylib   0x00007fff9146c05a  _pthread_body + 131
> 8   libsystem_pthread.dylib   0x00007fff9146bfd7  _pthread_body + 0
> 9   libsystem_pthread.dylib   0x00007fff914693ed  thread_start + 13
> qemu: qemu_mutex_lock: Invalid argument
> qemu-system-x86_64:Broken pipe
> Not a migration stream
> qemu-system-x86_64: load of migration failed: Invalid argument
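
(That "qemu: qemu_mutex_lock: Invalid argument" line is QEMU's pthread
wrapper reporting EINVAL back from pthread_mutex_lock(); roughly, as a
self-contained approximation of util/qemu-thread-posix.c rather than the
exact source, it does something like the sketch below, which is consistent
with OSX's pthreads rejecting a mutex it does not consider initialised.)

/* Rough approximation of QEMU's mutex-lock wrapper, showing how a non-zero
 * pthread error code becomes the abort message quoted above. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct QemuMutex {
    pthread_mutex_t lock;
} QemuMutex;

static void error_exit(int err, const char *msg)
{
    fprintf(stderr, "qemu: %s: %s\n", msg, strerror(err));
    abort();
}

void qemu_mutex_lock(QemuMutex *mutex)
{
    int err = pthread_mutex_lock(&mutex->lock);
    if (err) {
        /* err == EINVAL prints "qemu: qemu_mutex_lock: Invalid argument" */
        error_exit(err, __func__);
    }
}
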
>
> thanks
> -- PMM
--
Dr. David Alan Gilbert / [email protected] / Manchester, UK