"Dr. David Alan Gilbert (git)" <[email protected]> wrote:
> From: "Dr. David Alan Gilbert" <[email protected]>
>
> When using hugepages, rate limiting is necessary within each huge
> page, since a 1G huge page can take a significant time to send, so
> you end up with bursty behaviour.
>
> Fixes: 4c011c37ecb3 ("postcopy: Send whole huge pages")
> Reported-by: Lin Ma <[email protected]>
> Signed-off-by: Dr. David Alan Gilbert <[email protected]>
> ---
Reviewed-by: Juan Quintela <[email protected]>

I agree that rate limiting needs to be done within huge pages.

> diff --git a/migration/ram.c b/migration/ram.c
> index a4ae3b3120..a9177c6a24 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -2616,6 +2616,8 @@ static int ram_save_host_page(RAMState *rs, PageSearchStatus *pss,
>
>          pages += tmppages;
>          pss->page++;
> +        /* Allow rate limiting to happen in the middle of huge pages */
> +        migration_rate_limit();
>     } while ((pss->page & (pagesize_bits - 1)) &&
>              offset_in_ramblock(pss->block, pss->page << TARGET_PAGE_BITS));

But this does the rate limit check for each page, no? Even when not using
huge pages. Not that it should be a big issue (performance-wise). Have you
done any measurement?

Later, Juan.
