"Dr. David Alan Gilbert" <dgilb...@redhat.com> wrote: > * Juan Quintela (quint...@redhat.com) wrote: >> The function still don't use multifd, but we have simplified >> ram_save_page, xbzrle and RDMA stuff is gone. We have added a new >> counter and a new flag for this type of pages.
>> +static int ram_multifd_page(QEMUFile *f, PageSearchStatus *pss,
>> +                            bool last_stage, uint64_t *bytes_transferred)
>> +{
>> +    int pages;
>> +    uint8_t *p;
>> +    RAMBlock *block = pss->block;
>> +    ram_addr_t offset = pss->offset;
>> +
>> +    p = block->host + offset;
>> +
>> +    if (block == last_sent_block) {
>> +        offset |= RAM_SAVE_FLAG_CONTINUE;
>> +    }
>> +    pages = save_zero_page(f, block, offset, p, bytes_transferred);
>> +    if (pages == -1) {
>> +        *bytes_transferred +=
>> +            save_page_header(f, block, offset | RAM_SAVE_FLAG_MULTIFD_PAGE);
>> +        qemu_put_buffer(f, p, TARGET_PAGE_SIZE);
>> +        *bytes_transferred += TARGET_PAGE_SIZE;
>> +        pages = 1;
>> +        acct_info.norm_pages++;
>> +        acct_info.multifd_pages++;
>> +    }
>> +
>> +    return pages;
>> +}
>> +
>>  static int do_compress_ram_page(QEMUFile *f, RAMBlock *block,
>>                                  ram_addr_t offset)
>>  {
>> @@ -1427,6 +1461,8 @@ static int ram_save_target_page(MigrationState *ms, QEMUFile *f,
>>          res = ram_save_compressed_page(f, pss,
>>                                         last_stage,
>>                                         bytes_transferred);
>> +    } else if (migrate_use_multifd()) {
>> +        res = ram_multifd_page(f, pss, last_stage, bytes_transferred);
>
> I'm curious whether it's best to pick the destination fd at this level or
> one level higher; for example, would it be good to keep all the components
> of a host page or huge page together on the same fd? If so, then it would
> be best to pick the fd at the ram_save_host_page level.

My plan here is to change the migration code so it can be called with
bigger sizes, not page by page, and then the problem solves itself, no?

Later, Juan.