> -Original Message-
> From: Michael S. Tsirkin [mailto:m...@redhat.com]
> Sent: Tuesday, April 09, 2019 11:04 PM
> To: Zhuangyanying
> Cc: marcel.apfelb...@gmail.com; qemu-devel@nongnu.org; Gonglei (Arei)
>
> Subject: Re: [PATCH] msix: fix interrupt aggre
From: Zhuang Yanying
Recently I tested the performance of NVMe SSD passthrough and found, via
/proc/interrupts, that interrupts were aggregated on vcpu0 (or the first
vcpu of each NUMA node) when the guest OS was upgraded to sles12sp3
(or redhat7.6). But /proc/irq/X/smp_affinity_list shows that the interrup
From: Xiao Guangrong
The original idea is from Avi. kvm_mmu_write_protect_all_pages() is
extremely fast at write-protecting all the guest memory. Compared with
the ordinary algorithm, which write-protects last-level sptes one by one
based on the rmap, it simply updates the generation number to
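A minimal, self-contained C sketch of the generation-number idea described
above, not the actual KVM code; the names write_protect_all(), wp_gen and
sync_write_protection() are invented for illustration. The global operation
only bumps a counter, and each shadow page is write-protected lazily the
next time it is touched:

#include <stdint.h>
#include <stdio.h>

#define SPTES_PER_PAGE 8               /* 512 in real KVM; small for the demo */
#define SPTE_W         (1ull << 1)     /* "writable" bit of an spte           */

struct shadow_page {
    uint64_t spte[SPTES_PER_PAGE];
    uint64_t wp_gen;                   /* generation when last write-protected */
};

static uint64_t global_wp_gen;

/* O(1): the expensive per-spte work is deferred. */
static void write_protect_all(void)
{
    global_wp_gen++;
}

/* Called when a shadow page is touched again (e.g. on a fault). */
static void sync_write_protection(struct shadow_page *sp)
{
    if (sp->wp_gen == global_wp_gen)
        return;                        /* already up to date */
    for (int i = 0; i < SPTES_PER_PAGE; i++)
        sp->spte[i] &= ~SPTE_W;        /* clear the writable bit lazily */
    sp->wp_gen = global_wp_gen;
}

int main(void)
{
    struct shadow_page sp = { .wp_gen = 0 };

    for (int i = 0; i < SPTES_PER_PAGE; i++)
        sp.spte[i] = SPTE_W;

    write_protect_all();               /* cheap, regardless of guest size */
    sync_write_protection(&sp);        /* paid only when the page is used */
    printf("spte[0] writable: %d\n", !!(sp.spte[0] & SPTE_W));
    return 0;
}

With this scheme the cost of "write protect all" no longer scales with guest
memory size; the per-spte work is paid incrementally on the fault path.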
From: Zhuang Yanying
When live-migrating large-memory guests, the vcpu may hang for a long
time while starting migration, such as 9s for 2T
(linux-5.0.0-rc2+qemu-3.1.0).
The reason is that memory_global_dirty_log_start() takes too long, and the
vcpu is waiting for the BQL. The page-by-page D bit cleanup
From: Xiao Guangrong
It is used to track possible writable sptes on the shadow page, on
which the bit is set to 1 for sptes that are already writable or can
be locklessly updated to writable on the fast_page_fault path; a
counter for the number of possible writable sptes is also
introduced to
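The bookkeeping described above can be modelled in standalone C as one bit
per spte plus a counter; possible_writable_bitmap and possible_writable_count
are illustrative names, not KVM's actual fields:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct shadow_page {
    uint64_t possible_writable_bitmap;    /* one bit per spte (64 here)      */
    unsigned int possible_writable_count; /* how many bits are currently set */
};

static void mark_possible_writable(struct shadow_page *sp, unsigned int idx)
{
    uint64_t bit = 1ull << idx;

    if (!(sp->possible_writable_bitmap & bit)) {
        sp->possible_writable_bitmap |= bit;
        sp->possible_writable_count++;
    }
}

static void clear_possible_writable(struct shadow_page *sp, unsigned int idx)
{
    uint64_t bit = 1ull << idx;

    if (sp->possible_writable_bitmap & bit) {
        sp->possible_writable_bitmap &= ~bit;
        sp->possible_writable_count--;
    }
}

/* Write protection can return early when nothing on the page can be written. */
static bool needs_write_protection(const struct shadow_page *sp)
{
    return sp->possible_writable_count != 0;
}

int main(void)
{
    struct shadow_page sp = {0};

    mark_possible_writable(&sp, 3);    /* e.g. spte 3 made writable by a fault */
    printf("needs wp: %d\n", needs_write_protection(&sp));
    clear_possible_writable(&sp, 3);   /* spte 3 write-protected again */
    printf("needs wp: %d\n", needs_write_protection(&sp));
    return 0;
}

A caller that wants to write-protect a shadow page can skip it entirely when
the counter is zero.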
From: Zhuang yanying
When live-migrating large-memory guests, the vcpu may hang for a long
time while starting migration, such as 9s for 2T
(linux-5.0.0-rc2+qemu-3.1.0).
The reason is that memory_global_dirty_log_start() takes too long, and the
vcpu is waiting for the BQL. The page-by-page D bit cleanup
> -Original Message-
> From: Sean Christopherson [mailto:sean.j.christopher...@intel.com]
> Sent: Tuesday, January 22, 2019 11:17 PM
> To: Zhuangyanying
> Cc: xiaoguangr...@tencent.com; pbonz...@redhat.com; Gonglei (Arei)
> ; qemu-devel@nongnu.org; k...@vger.kernel
> -Original Message-
> From: Sean Christopherson [mailto:sean.j.christopher...@intel.com]
> Sent: Friday, January 18, 2019 12:32 AM
> To: Zhuangyanying
> Cc: xiaoguangr...@tencent.com; pbonz...@redhat.com; Gonglei (Arei)
> ; qemu-devel@nongnu.org; k...@vger.kernel
From: Zhuang Yanying
When live-migrating large-memory guests, the vcpu may hang for a long
time while starting migration, such as 9s for 2T
(linux-5.0.0-rc2+qemu-3.1.0).
The reason is that memory_global_dirty_log_start() takes too long, and the
vcpu is waiting for the BQL. The page-by-page D bit cleanup
From: Xiao Guangrong
It is used to track possible writable sptes on the shadow page, on
which the bit is set to 1 for sptes that are already writable or can
be locklessly updated to writable on the fast_page_fault path; a
counter for the number of possible writable sptes is also
introduced to
From: Xiao Guangrong
The current behavior of mmu_spte_update_no_track() does not match the
name _no_track(), as the A/D bits are actually tracked and returned to
the caller.
This patch introduces a real _no_track() function that updates the
spte regardless of the A/D bits and renames the original functio
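A tiny standalone C illustration of the naming mismatch: a genuine _no_track
update just overwrites the spte, while the tracking variant also returns the
old A/D bits to the caller. The bit positions and function names here are
invented for the sketch, not the real KVM definitions:

#include <stdint.h>
#include <stdio.h>

#define SPTE_A (1ull << 5)   /* accessed bit (x86 PTE layout, for the model) */
#define SPTE_D (1ull << 6)   /* dirty bit */

/* A real "_no_track" update: overwrite the spte and ignore whatever
 * A/D state the old value carried. */
static void spte_update_no_track(uint64_t *sptep, uint64_t new_spte)
{
    *sptep = new_spte;
}

/* The tracking variant: overwrite the spte but report the old A/D bits
 * so the caller can propagate accessed/dirty information. */
static uint64_t spte_update(uint64_t *sptep, uint64_t new_spte)
{
    uint64_t old = *sptep;

    *sptep = new_spte;
    return old & (SPTE_A | SPTE_D);
}

int main(void)
{
    uint64_t spte = SPTE_A | SPTE_D;

    printf("old A/D bits: 0x%llx\n",
           (unsigned long long)spte_update(&spte, 0));
    spte_update_no_track(&spte, SPTE_A);
    printf("spte now: 0x%llx\n", (unsigned long long)spte);
    return 0;
}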
From: Xiao Guangrong
The original idea is from Avi. kvm_mmu_write_protect_all_pages() is
extremely fast at write-protecting all the guest memory. Compared with
the ordinary algorithm, which write-protects last-level sptes one by one
based on the rmap, it simply updates the generation number to
From: Zhuang Yanying
Recently I tested live-migration with large-memory guests and found that the
vcpu may hang for a long time while starting migration, such as 9s for
2048G (linux-5.0.0-rc2+qemu-3.1.0).
The reason is that memory_global_dirty_log_start() takes too long, and the vcpu is
waiting for the BQL. The pag
From: Zhuang Yanying
Recently I tested live-migration with large-memory guests and found that the
vcpu may hang for a long time while starting migration, such as 9s for
2048G (linux-4.20.1+qemu-3.1.0).
The reason is that memory_global_dirty_log_start() takes too long, and the vcpu is
waiting for the BQL. The page-b
From: Zhuang Yanying
Hi,
Recently I tested live-migration of a VM with 1T of memory and
found that the vcpu may hang for up to 4s while starting migration.
The reason is that memory_global_dirty_log_start() takes too long, and the vcpu is
waiting for the BQL.
migrate thread                                vcpu
From: ZhuangYanying
When a spin_lock_irqsave() deadlock occurs inside the guest, vcpu threads
other than the lock-holding one enter the S state because of
pvspinlock. If an NMI is then injected via the libvirt API "inject-nmi",
the NMI cannot be injected into the VM.
The reason is:
1 It sets
From: ZhuangYanying
When a spin_lock_irqsave() deadlock occurs inside the guest, vcpu threads
other than the lock-holding one enter the S state because of
pvspinlock. If an NMI is then injected via the libvirt API "inject-nmi",
the NMI cannot be injected into the VM.
The reason is:
1 It sets
> -Original Message-
> From: Radim Krčmář [mailto:rkrc...@redhat.com]
> Sent: Wednesday, May 24, 2017 10:34 PM
> To: Zhuangyanying
> Cc: pbonz...@redhat.com; Herongguang (Stephen); qemu-devel@nongnu.org;
> Gonglei (Arei); Zhangbo (Oscar); k...@vger.kernel.org
> Sub
From: ZhuangYanying
Recently I found that an NMI could not be injected into a VM via the libvirt API.
To reproduce the problem:
1 use a redhat 7.3 guest
2 disable nmi_watchdog and trigger a spinlock deadlock inside the guest;
  check the running vcpu thread and make sure it is not vcpu0
3 inject an NMI into the guest via the libvirt API
> -Original Message-
> From: Daniel P. Berrange [mailto:berra...@redhat.com]
> Sent: Monday, April 24, 2017 6:34 PM
> To: Dr. David Alan Gilbert
> Cc: Zhuangyanying; Zhanghailiang; wangxin (U); qemu-devel@nongnu.org;
> Gonglei (Arei); Huangzhichao; pbonz...@redhat
Hi all,
Recently, I found that migration fails when vPMU is enabled.
Migrating vPMU state was introduced in linux-3.10 + qemu-1.7.
As long as vPMU is enabled, qemu will save/load the
vmstate_msr_architectural_pmu (msr_global_ctrl) register during the migration.
But global_ctrl is generated based on cpuid(0xA),
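To make the truncated point concrete, the sketch below shows one way a default
IA32_PERF_GLOBAL_CTRL value can be derived from the counter counts reported by
CPUID.0xA. It is an illustrative model, not the QEMU/KVM implementation, and
default_global_ctrl() is a made-up name:

#include <stdint.h>
#include <stdio.h>

#define FIXED_CTR_ENABLE_SHIFT 32   /* fixed-counter enable bits start at bit 32 */

/* nr_gp comes from CPUID.0xA EAX[15:8], nr_fixed from CPUID.0xA EDX[4:0]. */
static uint64_t default_global_ctrl(unsigned int nr_gp, unsigned int nr_fixed)
{
    uint64_t gp_bits    = (nr_gp    >= 64) ? ~0ull : (1ull << nr_gp) - 1;
    uint64_t fixed_bits = (nr_fixed >= 32) ? ~0ull : (1ull << nr_fixed) - 1;

    return gp_bits | (fixed_bits << FIXED_CTR_ENABLE_SHIFT);
}

int main(void)
{
    /* e.g. a CPU reporting 4 general-purpose and 3 fixed counters */
    uint64_t ctrl = default_global_ctrl(4, 3);

    printf("global_ctrl = 0x%llx\n", (unsigned long long)ctrl);
    return 0;
}

If the destination host's CPUID.0xA reports fewer counters than the source,
the migrated msr_global_ctrl can contain bits that are not valid there, which
is the kind of mismatch this report is about.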
From: ZhuangYanying
Qemu crashes on the source side while migrating, after the ipmi service
is started inside the VM.
./x86_64-softmmu/qemu-system-x86_64 --enable-kvm -smp 4 -m 4096 \
-drive
file=/work/suse/suse11_sp3_64_vt,format=raw,if=none,id=drive-virtio-disk0,cache=none
\
-device
virtio-blk-pci
From: Zhuang Yanying
Device ivshmem property use64=0 is designed to make the device
expose a 32 bit shared memory BAR instead of a 64 bit one. The
default is a 64 bit BAR, except that pc-1.2 and older retain a 32 bit
BAR. A 32 bit BAR can support only up to 1 GiB of shared memory.
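The sketch below is a standalone C model of the intended use64 semantics just
described: a 64-bit prefetchable BAR when use64=1, a 32-bit one otherwise. It
is illustrative, not the actual hw/misc/ivshmem.c code; only the PCI attribute
constants are the standard ones:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Standard PCI BAR attribute bits (values as in pci_regs.h). */
#define PCI_BASE_ADDRESS_MEM_TYPE_64  0x04
#define PCI_BASE_ADDRESS_MEM_PREFETCH 0x08

/* Intended behaviour of the use64 property: a 64-bit prefetchable BAR when
 * use64=1, a 32-bit prefetchable BAR (at most 1 GiB of shared memory for
 * ivshmem) when use64=0. */
static uint8_t ivshmem_bar_attr(bool use64)
{
    uint8_t attr = PCI_BASE_ADDRESS_MEM_PREFETCH;

    if (use64) {
        attr |= PCI_BASE_ADDRESS_MEM_TYPE_64;
    }
    return attr;
}

int main(void)
{
    printf("use64=1 -> BAR attr 0x%x\n", ivshmem_bar_attr(true));
    printf("use64=0 -> BAR attr 0x%x\n", ivshmem_bar_attr(false));
    return 0;
}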
From: ZhuangYanying
After commit 5400c02, ivshmem_64bit was renamed to not_legacy_32bit,
and the implementation of this property was changed.
Now with use64 = 1 the PCI attribute becomes ~PCI_BASE_ADDRESS_MEM_TYPE_64
(the default for ivshmem), so the device actually uses the legacy model
and cannot support mappings of 1G or larger,
which
From: ZhuangYanying
Recently I tested ivshmem and found that the use64 (that is,
not_legacy_32bit) implementation is odd, or even the opposite of what is
intended. Previously, with use64 = ivshmem_64bit = 1, attr |=
PCI_BASE_ADDRESS_MEM_TYPE_64, so ivshmem could package shared memory of
1G and above into bar2 and present it to the virtual
From: ZhuangYanying
Signed-off-by: Zhuang Yanying
---
hw/misc/ivshmem.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/hw/misc/ivshmem.c b/hw/misc/ivshmem.c
index b897685..abeaf3d 100644
--- a/hw/misc/ivshmem.c
+++ b/hw/misc/ivshmem.c
@@ -1045,6 +1045,7 @@ static void ivshmem_plain_init
From: ZhuangYanying
After "ivshmem: Split ivshmem-plain, ivshmem-doorbell off ivshmem",
ivshmem_64bit renamed to not_legacy_32bit, and changed the implementation of
this property.
Then use64 = not_legacy_32bit = 1, then PCI attribute configuration ~
PCI_BASE_ADDRESS_MEM_TYPE_64 (d
From: ZhuangYanying
Hyper-V HV_X64_MSR_VP_RUNTIME was introduced in linux-4.4 + qemu-2.5.
As long as the KVM module supports it, qemu will save/load the
vmstate_msr_hyperv_runtime register during the migration,
regardless of whether the hyperv_runtime configuration of x86_cpu_properties is
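The preview is cut off, but the underlying question is when the MSR should be
put on the wire. The standalone sketch below models one plausible check that
gates on the guest-visible property rather than only on host KVM support; the
struct and function names are invented and are not QEMU's vmstate code:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct vcpu_state {
    bool hyperv_runtime_enabled;   /* the hyperv_runtime cpu property        */
    bool kernel_supports_msr;      /* host KVM exposes HV_X64_MSR_VP_RUNTIME */
    uint64_t msr_hv_runtime;       /* current value of the MSR               */
};

/* Decide whether the MSR should be migrated: gate on the guest-visible
 * property, not only on host kernel support. */
static bool hyperv_runtime_needed(const struct vcpu_state *s)
{
    return s->hyperv_runtime_enabled &&
           s->kernel_supports_msr &&
           s->msr_hv_runtime != 0;
}

int main(void)
{
    struct vcpu_state s = { false, true, 1234 };

    printf("migrate msr: %d\n", hyperv_runtime_needed(&s)); /* 0: property off */
    s.hyperv_runtime_enabled = true;
    printf("migrate msr: %d\n", hyperv_runtime_needed(&s)); /* 1 */
    return 0;
}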