* Rao, Lei ([email protected]) wrote:
>
> -----Original Message-----
> From: Dr. David Alan Gilbert <[email protected]>
> Sent: Friday, March 26, 2021 2:08 AM
> To: Rao, Lei <[email protected]>
> Cc: Zhang, Chen <[email protected]>; [email protected];
> [email protected]; [email protected]; [email protected];
> [email protected]; [email protected]
> Subject: Re: [PATCH v4 09/10] Add the function of colo_bitmap_clear_diry.
>
> * leirao ([email protected]) wrote:
> > From: "Rao, Lei" <[email protected]>
> >
> > When we use continuous dirty memory copy for flushing the ram cache on
> > the secondary VM, we can also clean up the bitmap of contiguous dirty
> > pages. This can also reduce the VM stop time during a checkpoint.
> >
> > Signed-off-by: Lei Rao <[email protected]>
> > ---
> > migration/ram.c | 29 +++++++++++++++++++++++++----
> > 1 file changed, 25 insertions(+), 4 deletions(-)
> >
> > diff --git a/migration/ram.c b/migration/ram.c
> > index a258466..ae1e659 100644
> > --- a/migration/ram.c
> > +++ b/migration/ram.c
> > @@ -855,6 +855,30 @@ unsigned long colo_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
> > return first;
> > }
> >
> > +/**
> > + * colo_bitmap_clear_dirty: when we flush ram cache to ram, we will use
> > + * continuous memory copy, so we can also clean up the bitmap of
> > + * contiguous dirty memory.
> > + */
> > +static inline bool colo_bitmap_clear_dirty(RAMState *rs,
> > +                                           RAMBlock *rb,
> > +                                           unsigned long start,
> > +                                           unsigned long num)
> > +{
> > +    bool ret;
> > +    unsigned long i = 0;
> > +
> > +    qemu_mutex_lock(&rs->bitmap_mutex);
>
> Please use QEMU_LOCK_GUARD(&rs->bitmap_mutex);
>
> Will be changed in V5. Thanks.
>
> > +    for (i = 0; i < num; i++) {
> > +        ret = test_and_clear_bit(start + i, rb->bmap);
> > +        if (ret) {
> > +            rs->migration_dirty_pages--;
> > +        }
> > +    }
> > +    qemu_mutex_unlock(&rs->bitmap_mutex);
> > +    return ret;
>
> This implementation is missing the clear_bmap code that
> migration_bitmap_clear_dirty has.
> I think that's necessary now.
>
> Are we sure there's any benefit in this?
>
> Dave
>
> There is a note about clear_bmap in struct RAMBlock:
> "On destination side, this should always be NULL, and the variable
> `clear_bmap_shift' is meaningless."
> This means that clear_bmap is always NULL on the secondary VM, and
> flushing the ram cache to ram only ever happens on the secondary VM.
> So, I think the clear_bmap code is unnecessary for COLO.
Ah yes; can you add a comment there noting that this runs on the
secondary, to make that clear?
> As for the benefits: when the number of dirty pages flushed from the ram
> cache to ram is large, it reduces the number of lock acquisitions.
It might be good to measure the benefit.
Dave
> Lei
>
> > +}
> > +
> > static inline bool migration_bitmap_clear_dirty(RAMState *rs,
> >                                                 RAMBlock *rb,
> >                                                 unsigned long page)
> > @@ -3700,7 +3724,6 @@ void colo_flush_ram_cache(void)
> > void *src_host;
> > unsigned long offset = 0;
> > unsigned long num = 0;
> > - unsigned long i = 0;
> >
> > memory_global_dirty_log_sync();
> > WITH_RCU_READ_LOCK_GUARD() {
> > @@ -3722,9 +3745,7 @@ void colo_flush_ram_cache(void)
> > num = 0;
> > block = QLIST_NEXT_RCU(block, next);
> > } else {
> > -            for (i = 0; i < num; i++) {
> > -                migration_bitmap_clear_dirty(ram_state, block, offset + i);
> > -            }
> > +            colo_bitmap_clear_dirty(ram_state, block, offset, num);
> > dst_host = block->host
> > + (((ram_addr_t)offset) << TARGET_PAGE_BITS);
> > src_host = block->colo_cache
> > --
> > 1.8.3.1
> >
> --
> Dr. David Alan Gilbert / [email protected] / Manchester, UK
>
--
Dr. David Alan Gilbert / [email protected] / Manchester, UK