Hi Christian,

I've been testing the Bionic+UCA-Stein combination (libvirt
5.0.0-1ubuntu2~cloud0, qemu 1:3.1+dfsg-2ubuntu3~cloud0) and have found live
migration from stock Bionic hypervisors (libvirt 4.0.0-1ubuntu8.8, qemu
1:2.11+dfsg-1ubuntu7.12) to be very unstable. I also tried an in-place
package upgrade from stock to UCA-Stein. While a live migration is in
progress, the VMs randomly end up stuck "paused" on both the source and
destination, qemu becomes unresponsive to monitor commands, and the logs
show messages such as:
2019-04-11 18:41:41.349+0000: 6102: warning : 
qemuDomainObjBeginJobInternal:4933 : Cannot start job (query, none) for domain 
one-362495; current job is (query, none) owned by (43880 
remoteDispatchDomainBlockStatsFlags, 0 <null>) for (31s, 0s)
2019-04-11 18:41:41.349+0000: 6102: error : qemuDomainObjBeginJobInternal:4945 
: Timed out during operation: cannot acquire state change lock (held by 
remoteDispatchDomainBlockStatsFlags)
2019-04-11 18:41:41.350+0000: 6099: error : virNetSocketReadWire:1796 : Cannot 
recv data: Connection reset by peer
2019-04-11 18:42:38.811+0000: 6100: warning : 
qemuDomainObjBeginJobInternal:4933 : Cannot start job (query, none) for domain 
one-362495; current job is (query, none) owned by (43880 
remoteDispatchDomainBlockStatsFlags, 0 <null>) for (88s, 0s)
2019-04-11 18:42:38.811+0000: 6100: error : qemuDomainObjBeginJobInternal:4945 
: Timed out during operation: cannot acquire state change lock (held by 
remoteDispatchDomainBlockStatsFlags)
2019-04-11 18:42:38.812+0000: 6099: error : virNetSocketReadWire:1811 : End of 
file while reading data: Input/output error
2019-04-11 18:42:49.434+0000: 6104: warning : 
qemuDomainObjBeginJobInternal:4933 : Cannot start job (query, none) for domain 
one-362495; current job is (query, none) owned by (43880 
remoteDispatchDomainBlockStatsFlags, 0 <null>) for (99s, 0s)

The migration never finishes, so I have to run virsh destroy to "recover" the
domain.

But that's not really what this ticket is about... Because of these
issues I haven't been able to test the Trusty -> Bionic+UCA-Stein
migration, as Bionic+UCA-Stein seems even more unstable than stock
Bionic :-(

I'm not sure how best to proceed with troubleshooting for you at this
point; I feel that Xenial is my only option...

Corey Melanson

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1826051

Title:
  VMs go to 100% CPU after live migration from Trusty to Bionic

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1826051/+subscriptions

