Instead of taking the iothread lock, we can simply swap these calls. If
some write to our range occurs before the bitmap is reset, it will be
picked up by the subsequent aio read, because the read happens, in any
case, after the bitmap reset.

Signed-off-by: Vladimir Sementsov-Ogievskiy <[email protected]>
---
 block-migration.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/block-migration.c b/block-migration.c
index d0c825f..908a66d 100644
--- a/block-migration.c
+++ b/block-migration.c
@@ -315,13 +315,11 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
     block_mig_state.submitted++;
     blk_mig_unlock();
 
-    qemu_mutex_lock_iothread();
+    bdrv_reset_dirty_bitmap(bs, bmds->dirty_bitmap, cur_sector, nr_sectors);
+
     blk->aiocb = bdrv_aio_readv(bs, cur_sector, &blk->qiov,
                                 nr_sectors, blk_mig_read_cb, blk);
 
-    bdrv_reset_dirty_bitmap(bs, bmds->dirty_bitmap, cur_sector, nr_sectors);
-    qemu_mutex_unlock_iothread();
-
     bmds->cur_sector = cur_sector + nr_sectors;
     return (bmds->cur_sector >= total_sectors);
 }
-- 
1.9.1
