Hi,

> -----Original Message-----
> From: Paolo Bonzini [mailto:[email protected]]
> Sent: Friday, May 09, 2014 5:54 PM
> To: Gonglei (Arei); [email protected]
> Cc: [email protected]; Herongguang (Stephen); Huangweidong (C)
> Subject: Re: [RFC] vhost: Can we change synchronize_rcu to call_rcu in
> vhost_set_memory() in vhost kernel module?
>
> On 09/05/2014 11:04, Gonglei (Arei) wrote:
> >> > Yes, for example enabling/disabling PCI BARs would have that effect.
> >> >
> > Yes, but PCI BARs are mapped in the PCI hole and do not overlap with RAM
> > memory regions, so enabling or disabling PCI BARs would not change the
> > RAM MRs' mapping.
>
> PCI BARs can be RAM (one special case being the ROM BAR).
>
> Paolo

Many thanks for your explanation.
We have now found that migration downtime is mostly spent in the
KVM_SET_GSI_ROUTING and VHOST_SET_MEM_TABLE ioctls, which internally block in
synchronize_rcu(). Besides live migration, setting MSI IRQ CPU affinity inside
the VM also makes QEMU issue the KVM_SET_GSI_ROUTING ioctl.

From the previous discussion
https://lists.gnu.org/archive/html/qemu-devel/2014-03/msg04925.html
we know that you are going to replace RCU with SRCU in KVM_SET_GSI_ROUTING.
Although SRCU is considerably better than plain RCU, in our test case it still
cannot satisfy our needs.

Our VMs run in a telecom scenario: every second each VM reports its CPU and
memory usage to a balance node, and the balance node dispatches work to the
VMs according to their load. Because this balancing needs to be accurate, the
IRQ affinity settings inside the VM also need to be accurate, so we rebalance
IRQ affinity every 0.5 s. For this telecom scenario the KVM_SET_GSI_ROUTING
ioctl therefore needs substantial optimization, and for the live-migration
case VHOST_SET_MEM_TABLE needs attention as well.

We tried changing synchronize_rcu() to call_rcu() with a rate limit, but the
rate limit is hard to configure; a rough sketch of the kind of change we
experimented with is appended below. Do you have better ideas for achieving
this? Thanks.

Cc'ing Avi & Gleb for more insight.
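For concreteness, here is a minimal sketch of the call_rcu()-with-rate-limit
idea, assuming a kernel-side wrapper around the UAPI struct vhost_memory, a
d->memory pointer of that wrapper type, and a per-device pending_frees counter
used as a crude throttle. The names vhost_memory_rcu, vhost_publish_memory,
pending_frees and VHOST_MAX_PENDING_FREES are made up for this sketch and do
not appear in drivers/vhost/vhost.c; it is not a tested patch.

/*
 * Illustrative sketch only, not a real patch against drivers/vhost/vhost.c.
 * Assumptions:
 *   - struct vhost_dev gains an atomic_t pending_frees member,
 *   - d->memory becomes a struct vhost_memory_rcu __rcu * instead of
 *     struct vhost_memory __rcu *, and readers dereference ->mem,
 *   - all wrapper/function names below are invented for this example.
 */
#include <linux/atomic.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include "vhost.h"			/* struct vhost_dev, struct vhost_memory */

#define VHOST_MAX_PENDING_FREES	64	/* arbitrary rate-limit threshold */

struct vhost_memory_rcu {
	struct rcu_head rcu;
	atomic_t *pending;		/* owning device's pending_frees counter */
	struct vhost_memory *mem;	/* table that readers actually walk */
};

static void vhost_memory_free_rcu(struct rcu_head *head)
{
	struct vhost_memory_rcu *old =
		container_of(head, struct vhost_memory_rcu, rcu);

	kfree(old->mem);
	atomic_dec(old->pending);
	kfree(old);
}

/* Called from the VHOST_SET_MEM_TABLE path with d->mutex held. */
static void vhost_publish_memory(struct vhost_dev *d,
				 struct vhost_memory_rcu *newmem)
{
	struct vhost_memory_rcu *oldmem;

	oldmem = rcu_dereference_protected(d->memory,
					   lockdep_is_held(&d->mutex));
	rcu_assign_pointer(d->memory, newmem);
	if (!oldmem)
		return;

	if (atomic_inc_return(&d->pending_frees) <= VHOST_MAX_PENDING_FREES) {
		/* Fast path: return to userspace without waiting for a
		 * grace period; the old table is freed from the callback. */
		oldmem->pending = &d->pending_frees;
		call_rcu(&oldmem->rcu, vhost_memory_free_rcu);
	} else {
		/* Too many deferred frees in flight: throttle by falling
		 * back to the old blocking behaviour. */
		atomic_dec(&d->pending_frees);
		synchronize_rcu();
		kfree(oldmem->mem);
		kfree(oldmem);
	}
}

The painful part is choosing the threshold: too small and we fall back to
synchronize_rcu() exactly during the bursts we care about; too large and stale
memory tables can accumulate for a long time. That is what we meant by the
rate limit being hard to configure.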

Best regards,
-Gonglei