Modifications of 'ram_list.blocks' are protected by 'ram_list.mutex'.
last_ram_page() reads the current state of 'ram_list.blocks' to
determine the RAM size.
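
For reference, last_ram_page() is roughly the following (paraphrased
from softmmu/physmem.c, so details may differ between versions):

    static ram_addr_t last_ram_page(void)
    {
        RAMBlock *block;
        ram_addr_t last = 0;

        /* Take the highest end address over all registered blocks. */
        RCU_READ_LOCK_GUARD();
        RAMBLOCK_FOREACH(block) {
            last = MAX(last, block->offset + block->max_length);
        }
        return last >> TARGET_PAGE_BITS;
    }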
ram_block_add() calls last_ram_page() before taking the mutex, making
the following race possible:
             CPU#0                                 CPU#1

                                    ram_block_add()
                                        old_ram_size = last_ram_page()
                                        qemu_mutex_lock_ramlist()
                                        ...
                                        dirty_memory_extend(old_ram_size,
                                                            new_ram_size);
ram_block_add()
    old_ram_size = last_ram_page()
                                        // insert block to ram_list
                                        QLIST_INSERT_*_RCU()
                                        qemu_mutex_unlock_ramlist()
qemu_mutex_lock_ramlist()
...
dirty_memory_extend(old_ram_size, new_ram_size);
Such a race may result in leaking parts of the dirty memory bitmap:
because of the stale 'old_ram_size' value, dirty_memory_extend() on
CPU#0 allocates and reinitializes some of the dirty memory bitmap
blocks that CPU#1 has already allocated, dropping the only references
to CPU#1's allocations.
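
To illustrate the leak mechanism in isolation, here is a hypothetical
standalone sketch; the names 'extend' and 'bitmap_blocks' are made up
for the example, this is not QEMU's dirty_memory_extend():

    #include <stdlib.h>

    #define MAX_BLOCKS 16

    static unsigned long *bitmap_blocks[MAX_BLOCKS];

    /* Allocate one bitmap chunk per block in [old_nblocks, new_nblocks),
     * the way dirty_memory_extend() grows the dirty bitmap. */
    static void extend(size_t old_nblocks, size_t new_nblocks)
    {
        for (size_t i = old_nblocks; i < new_nblocks; i++) {
            /* If old_nblocks was stale, bitmap_blocks[i] may already
             * hold a live allocation; overwriting the pointer leaks it. */
            bitmap_blocks[i] = calloc(1, sizeof(unsigned long));
        }
    }

    int main(void)
    {
        extend(0, 8);  /* CPU#1: grows the bitmap to 8 blocks */
        extend(4, 12); /* CPU#0: stale old size 4, blocks 4..7 leak */
        return 0;
    }

After the second call, the chunks installed at indices 4..7 by the
first call are unreachable: the same overwrite happens when two
ram_block_add() calls race with a stale 'old_ram_size'.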
Fix this by moving the last_ram_page() call under
qemu_mutex_lock_ramlist(), so that the RAM size is read and the dirty
bitmap is extended within the same critical section.
Cc: [email protected]
Signed-off-by: Andrey Ryabinin <[email protected]>
---
softmmu/physmem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/softmmu/physmem.c b/softmmu/physmem.c
index 4e1b27a20e..32f76362bf 100644
--- a/softmmu/physmem.c
+++ b/softmmu/physmem.c
@@ -1969,9 +1969,9 @@ static void ram_block_add(RAMBlock *new_block, Error **errp)
     ram_addr_t old_ram_size, new_ram_size;
     Error *err = NULL;
 
+    qemu_mutex_lock_ramlist();
     old_ram_size = last_ram_page();
 
-    qemu_mutex_lock_ramlist();
     new_block->offset = find_ram_offset(new_block->max_length);
 
     if (!new_block->host) {
--
2.34.1