zhanghailiang <[email protected]> wrote:
> We should not load PVM's state directly into SVM, because errors may
> occur while SVM is receiving the data, which would break SVM.
>
> We need to ensure that all data has been received before loading the
> state into SVM, so we use extra memory to cache the data (PVM's RAM).
> The RAM cache on the secondary side is initially identical to
> SVM/PVM's memory. During each checkpoint we first cache PVM's dirty
> pages into this RAM cache, so the cache always matches PVM's memory at
> every checkpoint; we then flush the cached RAM to SVM after we have
> received all of PVM's state.
>
> Cc: Dr. David Alan Gilbert <[email protected]>
> Signed-off-by: zhanghailiang <[email protected]>
> Signed-off-by: Li Zhijian <[email protected]>
> ---
> v2:
> - Move colo_init_ram_cache() and colo_release_ram_cache() out of
> incoming thread since both of them need the global lock, if we keep
> colo_release_ram_cache() in incoming thread, there are potential
> dead-lock.
> - Remove bool ram_cache_enable flag, use migration_incoming_in_state()
> instead.
> - Remove the Reviewd-by tag because of the above changes.
> +out_locked:
> + QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
> + if (block->colo_cache) {
> + qemu_anon_ram_free(block->colo_cache, block->used_length);
> + block->colo_cache = NULL;
> + }
> + }
> +
> + rcu_read_unlock();
> + return -errno;
> +}
> +
> +/* The global lock must be held when calling this helper */
> +void colo_release_ram_cache(void)
> +{
> + RAMBlock *block;
> +
> + rcu_read_lock();
> + QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
> + if (block->colo_cache) {
> + qemu_anon_ram_free(block->colo_cache, block->used_length);
> + block->colo_cache = NULL;
> + }
> + }
> + rcu_read_unlock();
> +}
Could we factor the removal into a helper function? We have exactly two
copies of the same code. Right now the duplicated code is small, but it
could grow, no?
Later, Juan.