QEMU 1.7 was released, Quantal has 10ish days left of support, and
Raring is EOL
** Changed in: qemu
Status: Fix Committed => Fix Released
** Changed in: qemu-kvm (Ubuntu Quantal)
Status: Triaged => Invalid
** Changed in: qemu-kvm (Ubuntu Raring)
Status: Triaged => Invalid
Fix will be part of QEMU 1.7.0 (commit fc1c4a5, migration: drop
MADVISE_DONT_NEED for incoming zero pages, 2013-10-24).
** Changed in: qemu
Status: New => Fix Committed
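For anyone reading along, here is a rough sketch (not the literal QEMU code; the helper names are made up) of what the commit above changes: on the incoming side of a migration, pages that arrive all-zero used to be handed back to the kernel with madvise(MADV_DONTNEED), which splits the transparent huge pages backing guest RAM and leaves the guest running on 4K pages afterwards. The fix is simply to leave already-zero pages alone.

#include <string.h>

/* Illustrative stand-in for QEMU's zero-page handling on the
 * destination; buffer_is_zero() and handle_incoming_page() are
 * hypothetical names, not the real QEMU functions. */
static int buffer_is_zero(const void *p, size_t len)
{
    const unsigned char *b = p;
    for (size_t i = 0; i < len; i++) {
        if (b[i]) {
            return 0;
        }
    }
    return 1;
}

static void handle_incoming_page(void *host, unsigned char ch, size_t size)
{
    if (ch != 0 || !buffer_is_zero(host, size)) {
        memset(host, ch, size);
    }
    /* The old code additionally did, for ch == 0:
     *     madvise(host, size, MADV_DONTNEED);
     * which discarded the backing huge page. Dropping that call is the
     * essence of the fix described above. */
}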
** Changed in: qemu-kvm (Ubuntu Quantal)
Assignee: Chris J Arges (arges) => (unassigned)
** Changed in: qemu-kvm (Ubuntu Raring)
Assignee: Chris J Arges (arges) => (unassigned)
This bug was fixed in the package qemu-kvm - 1.0+noroms-0ubuntu14.12
---
qemu-kvm (1.0+noroms-0ubuntu14.12) precise-proposed; urgency=low
* migration-do-not-overwrite-zero-pages.patch,
call-madv-hugepage-for-guest-ram-allocations.patch:
Fix performance degradation after migr
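As background for reviewers: call-madv-hugepage-for-guest-ram-allocations.patch boils down to hinting the kernel that guest RAM should be backed by transparent huge pages. A minimal sketch of the idea (illustrative only; the function name is made up and this is not the actual QEMU allocator):

#include <stddef.h>
#include <sys/mman.h>

/* Hypothetical guest-RAM allocator: map anonymous memory and ask the
 * kernel to back it with transparent huge pages. */
static void *alloc_guest_ram(size_t size)
{
    void *ram = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (ram == MAP_FAILED) {
        return NULL;
    }
#ifdef MADV_HUGEPAGE
    /* Advisory only: the kernel may still fall back to 4K pages. */
    madvise(ram, size, MADV_HUGEPAGE);
#endif
    return ram;
}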
I have verified this on my local machine using virt-manager's save
memory, savevm/loadvm via the qemu monitor, and migrate via the qemu
monitor.
** Tags removed: verification-needed
** Tags added: verification-done
Hello Mark, or anyone else affected,
Accepted qemu-kvm into precise-proposed. The package will build now and
be available at http://launchpad.net/ubuntu/+source/qemu-kvm/1.0+noroms-
0ubuntu14.12 in a few hours, and then in the -proposed repository.
Please help us by testing this new package. See
** Description changed:
SRU Justification
[Impact]
* Users of QEMU that save their memory states using savevm/loadvm or migrate
experience worse performance after the migration/loadvm. To work around these
issues, VMs must be completely rebooted. Optimally we should be able to restore
a VM
** Description changed:
SRU Justification
- [Impact]
- * Users of QEMU that save their memory states using savevm/loadvm or migrate
experience worse performance after the migration/loadvm. To work around these
issues, VMs must be completely rebooted. Optimally we should be able to restore
a V
** Description changed:
+ SRU Justification
+ [Impact]
+ * Users of QEMU that save their memory states using savevm/loadvm or migrate
experience worse performance after the migration/loadvm. To work around these
issues, VMs must be completely rebooted. Optimally we should be able to restore
a V
I found that two patches need to be backported to solve this issue:
ad0b5321f1f797274603ebbe20108b0750baee94
211ea74022f51164a7729030b28eec90b6c99a08
I've added the necessary bits into precise and tried a few tests:
1) Measure performance before and after savevm/loadvm.
2) Measure performance bef
From my testing this has been fixed in the saucy version (1.5.0) of qemu. It
is fixed by this patch:
f1c72795af573b24a7da5eb52375c9aba8a37972
However, later in the history this commit was reverted, which broke this again.
The other commit that fixes this is:
211ea74022f51164a7729030b28eec90b6c99a
** Changed in: qemu-kvm (Ubuntu)
Status: Triaged => In Progress
** Changed in: qemu-kvm (Ubuntu)
Assignee: (unassigned) => Chris J Arges (arges)
This is being looked at in an upstream thread at
http://lists.gnu.org/archive/html/qemu-devel/2013-07/msg01850.html
Cheers,
We are reliably seeing this post live-migration on an OpenStack
platform.
Setup:
hypervisor ==> Ubuntu 12.04.3 LTS
libvirt ===> 1.0.2-0ubuntu11.13.04.2~cloud0
qemu-kvm ===> 1.0+noroms-0ubuntu14.10
storage: NFS exports
Guest VM OS: Ubuntu 12.04.1 LTS and CentOS 6.4
We have ept enabled.
Sample ins
My HyperDex cluster nodes' performance dropped significantly after migrating
them (virsh migrate --live ...). They are hosted on precise KVM (12.04.2 Precise
Pangolin). The first Google search result landed me on this page, so it seems I'm not
the only one encountering this problem. I hope this
@Paolo yes, when I was doing that testing I was able to consistently
reproduce those results in #23, but it was a red herring. As of now I
cannot reproduce the results in #23 consistently (I suspect it may have
had something to do with the order I was executing tests but didn't
chase it any further).
Oops, I missed Chris's comment #28. Thanks.
From comment #23, the 1.4 machine type seems to be "fast", while 1.3 is
slow. This doesn't make much sense, given the differences between the
two machine types:
enable_compat_apic_id_mode();
.driver = "usb-tablet",\
.prop
Can you please check if you have EPT enabled? This could be
https://bugzilla.kernel.org/show_bug.cgi?id=58771
** Bug watch added: Linux Kernel Bug Tracker #58771
http://bugzilla.kernel.org/show_bug.cgi?id=58771
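For anyone who wants to check quickly: the kvm_intel module exposes the setting as a module parameter, so something like the following (illustrative only) reports whether EPT is on; "cat /sys/module/kvm_intel/parameters/ept" gives the same answer.

#include <stdio.h>

int main(void)
{
    /* The kvm_intel module parameter reads 'Y'/'1' when EPT is enabled. */
    FILE *f = fopen("/sys/module/kvm_intel/parameters/ept", "r");
    int c;

    if (!f) {
        printf("kvm_intel not loaded (or not an Intel host)\n");
        return 1;
    }
    c = fgetc(f);
    fclose(f);
    printf("EPT enabled: %s\n", (c == 'Y' || c == '1') ? "yes" : "no");
    return 0;
}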
** Changed in: linux (Ubuntu)
Status: Incomplete => Confirmed
** Also affects: linux (Ubuntu)
Importance: Undecided
Status: New
Update:
From our testing this bug affects KVM hypervisors on Intel processors
that have the EPT feature enabled, with kernels 3.0 and greater. A list
of Intel EPT-supported CPUs is available here
(http://ark.intel.com/Products/VirtualizationTechnology).
When using a KVM Hypervisor Host with Linux kernel 3.0 o
I used this handy tool to run preliminary system call benchmarks:
http://code.google.com/p/byte-unixbench/
In a nutshell, what I found confirms that live migration does indeed
degrade performance on precise KVM.
I hope the below results help narrow down this critical problem to event
I have a few VMs (precise) that process high-volume transaction jobs
each night. After I did a live-migrate operation to replace a faulty
power supply on a bare-metal server, we encountered sluggish performance
on the migrated VMs; significantly higher CPU usage was recorded in particular,
where the same
Can you clarify what's not 100% reproducible? The only time that it is
not reproducible on my system is between different qemu machine types, as
I listed. If tests are performed on the same machine type, they are
reproducible 100% of the time on the same host and VM guest, as shown in
comment #23.
I hav
The results of comment 23 suggest that the issue is not 100%
reproducible. Can you please run the benchmark 3-4 times
(pre-save/post-restore) and show all 4 results? One benchmark only, e.g.
"simple read", will do.
Also please try putting a big file on disk (something like "dd
if=/dev/zero of=bigfile
** Also affects: qemu
Importance: Undecided
Status: New
I did some tests using Raring Server Beta 2. There are some interesting
results; for this test the results are mixed. Using different machine
types produces different results. At this time I've only run these
simple "lat_syscall" tests from lmbench and haven't run some of the more
exhaustive benchm
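For anyone who wants to repeat the comparison without installing lmbench, a tiny stand-in for its "lat_syscall null" test is sketched below (illustrative only, not the lmbench code): run it in the guest before the save/migration and again after the restore, then compare the per-call latency.

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    const long iters = 5 * 1000 * 1000;
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (long i = 0; i < iters; i++) {
        (void)getppid();    /* cheap syscall that glibc does not cache */
    }
    clock_gettime(CLOCK_MONOTONIC, &end);

    double ns = (end.tv_sec - start.tv_sec) * 1e9 +
                (end.tv_nsec - start.tv_nsec);
    printf("getppid: %.4f microseconds per call\n", ns / iters / 1000.0);
    return 0;
}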
Quoting C Cormier (1100...@bugs.launchpad.net):
> Could you confirm that your .1 tests were on a freshly booted Guest OS?
Yup, it was a fresh boot.
Since the VMs are on shared storage, do you have a box you could wire
up running raring, to test a more recent qemu?
Could you confirm that your .1 tests were on a freshly booted Guest OS?
Our hardware is likely different... but your latencies are close to my Post
Restore times.
I just reproduced this with 1GB RAM and a single CPU.
-Pre Save-
Simple syscall: 0.0519 microseconds
Simple read: 0.1356 microseconds
Simple write
Can you reproduce this with 1 or 2 cores and 1G ram?
Quoting Chris Cormier (ccorm...@gmail.com):
> @serge-hallyn
> I'm using Netapp filer in c-mode for NFS storage, mount options are:
> (rw,nosuid,nodev,noatime,hard,nfsvers=3,tcp,intr,rsize=32768,wsize=32768,addr=x.x.x.x).
>
> However, I can reproduce this on a a host with or without NFS, using
> l
@serge-hallyn
I'm using Netapp filer in c-mode for NFS storage, mount options are:
(rw,nosuid,nodev,noatime,hard,nfsvers=3,tcp,intr,rsize=32768,wsize=32768,addr=x.x.x.x).
However, I can reproduce this on a host with or without NFS, using
local disk, qcow2, or raw images and the OP was using FC
I guess the final thing to test is virsh save/restore with an nfs backend.
Can you tell me the configuration of the nfs server?
I couldn't reproduce this with a vm whose xml includes:
  <memory>524288</memory>
  <currentMemory>524288</currentMemory>
  <vcpu>1</vcpu>
  <type>hvm</type>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <emulator>/usr/bin/kvm</emulator>
I started the vm, ran 6 kernel compilations, did 'virsh save/virsh
restore', then did 6
So far I've run the equivalent of test 1 in comment #14, and also didn't find
any
performance degradation. I left the host crunching a few other VMs for several
hours, but performance on a kernel compilation stayed the same.
** Changed in: qemu-kvm (Ubuntu)
Importance: Medium => High
** Changed in: qemu-kvm (Ubuntu)
Status: Confirmed => Triaged
A few updates from a few tests we ran:
Setup:
hypervisor ==> Ubuntu 12.04.2 LTS
libvirt ===> 0.9.8-2ubuntu17.7
qemu-kvm ===> 1.0+noroms-0ubuntu14.7
storage: NFS exports
Guest VM OS: Ubuntu 12.04.1 LTS
Test 1: Created a new VM and kept it Idle for 60+ hours, then ran Unix
benchmark test against it
@ccormier
I've thought all along it might be a libc issue, but testing libc 2.13 on
precise would be rather difficult. To some extent I feel like this
rules out the kernel as an issue though, since the same kernel on
precise/lucid yields different results.
Have you tried letting a precise VM idle
I've been able to confirm the same as the OP regarding the different
Ubuntu distributions as guests. These tests should help rule out/pinpoint
the kernel and modules.
Using the same 12.04 hypervisors for all tests.
Testing different guests, I was able to determine:
-Lucid with default kernel is NOT
Can anyone confirm if you see similar slowdowns if you leave the VM running
for a few days? I thought it was related to live migration, but I saw
my performance degrade if the VM/physical host was up and idle for a
couple days.
We are also getting reports from some OpenNebula users with this very
same issue. Is there any extra information about what is causing
the slowdown, or a fix?
The same thing happened to us running Ubuntu precise KVM for our
hypervisors. VMs that are live-migrated suffer noticeable performance
degradation. We tried a few performance tests against live-migrated vs
non-migrated VMs and the problem is easily reproducible (we used
JMeter).
Ubuntu (hyper
Launchpad had previously marked this confirmed for affecting several
users. I'm curious who else has seen this behavior, and under what
circumstances?
** Changed in: qemu-kvm (Ubuntu)
Status: Incomplete => New
** Changed in: qemu-kvm (Ubuntu)
Status: New => Confirmed
There's nothing in syslog for the VM or host that would imply
performance degradation.
I have done this with hugepages and made sure huge page use was
consistent. Previously I disabled hugepages and didn't see a difference
but I haven't tested again.
I'm using (C)LVM backed by FCoE/SAN but I have
(marking incomplete pending response. In addition to retitling, I think
the bug should also be targeted to project QEMU and qemu Ubuntu source
package)
It doesn't sound like this bug should be removed, but rather re-titled.
Are there any messages in syslog suggesting performance
degradation? Have you tried to reproduce this with and without
hugepages?
Can you reproduce the same thing with a simple local raw file or LVM
backend?
Can you give t
I tested with qemu-kvm 1.3.0. It seems that the issue still exists, but
that it exists without a live migration if you wait long enough.
That is, if you start a VM on one node and run phoronix batch-run
pts/compilation, wait 4 hours (with the VM and physical host doing
nothing else) and re-run the
I don't see qemu-kvm 1.3.0 yet. Will test when you get it pushed,
hopefully Tuesday (01/22/2013) if you've pushed by then.
Thanks, that is very interesting. I'll be pushing qemu 1.3.0 to ubuntu
raring hopefully tomorrow - it would be interesting to know if this
still happens there.
** Changed in: qemu-kvm (Ubuntu)
Importance: Undecided => Medium
** Changed in: qemu-kvm (Ubuntu)
Status: Confirmed => Triaged
Status changed to 'Confirmed' because the bug affects multiple users.
** Changed in: qemu-kvm (Ubuntu)
Status: New => Confirmed