On Wed, Oct 01, 2025 at 16:10:44 +0100, Daniel P. Berrangé wrote:
> On Wed, Oct 01, 2025 at 11:05:59AM +0000, Dr. David Alan Gilbert wrote:
> > * Jiří Denemark ([email protected]) wrote:
> > > On Tue, Sep 30, 2025 at 16:04:54 -0400, Peter Xu wrote:
> > > > On Tue, Sep 30, 2025 at 09:53:31AM +0200, Jiří Denemark wrote:
> > > > > On Thu, Sep 25, 2025 at 14:22:06 -0400, Peter Xu wrote:
> > > > > > On Thu, Sep 25, 2025 at 01:54:40PM +0200, Jiří Denemark wrote:
> > > > > > > On Mon, Sep 15, 2025 at 13:59:15 +0200, Juraj Marcin wrote:
> > > > > > So far, dest QEMU will try to resume the VM after receiving the
> > > > > > RUN command; that is what loadvm_postcopy_handle_run_bh() does.
> > > > > > With autostart=1 set, it will (1) first try to activate all block
> > > > > > devices and, only if that succeeded, (2) call vm_start(), at the
> > > > > > end of which the RESUME event is generated.  So RESUME currently
> > > > > > implies both that disk activation succeeded and that vm start
> > > > > > worked.
> > > > > > 
> > > > > > > may still fail when locking disks fails (not sure if this is
> > > > > > > the only way cont may fail). In this case we cannot cancel the
> > > > > > > migration on the
> > > > > > 
> > > > > > Is there any known issue with locking disks that would make the
> > > > > > dest fail?  This really sounds like we should have the admin take
> > > > > > a look.
> > > > > 
> > > > > Oh definitely, it would be some kind of a storage access issue on
> > > > > the destination. But we'd like to give the admin an option to do
> > > > > something other than just killing the VM :-) Either by
> > > > > automatically canceling the migration or by allowing recovery once
> > > > > the storage issues are solved.
> > > > 
> > > > The problem is, if the storage locking stopped working properly,
> > > > then how can we guarantee that the shared storage itself is working
> > > > properly?
> > > >
> > > > When I replied previously, I was expecting the admin to take a look
> > > > and fix the storage; I didn't expect the VM could still be recovered
> > > > if there's no confidence that the block devices will work fine.  To
> > > > me the locking errors may already imply block corruption, or should
> > > > I not see it like that?
> > > 
> > > If the storage itself is broken, there's clearly nothing we can do. But
> > > the thing is we're accessing it from two distinct hosts. So while it may
> > > work on the source, it can be broken on the destination. For example,
> > > connection between the destination host and the storage may be broken.
> > > Not sure how often this can happen in real life, but we have a bug
> > > report that (artificially) breaking storage access on the destination
> > > results in a paused VM on the source which can only be killed.
> > 
> > I've got a vague memory that a tricky case is when some of your storage
> > devices are broken on the destination, but not all.  So you tell the
> > block layer you want to take them on the destination; some take their
> > lock, one fails.  Now what state are you in?  I'm not sure the block
> > layer had a way of telling you what state you were in when I was last
> > involved in that.
> 
> As long as the target QEMU CPUs have NOT started running, no I/O
> writes should have been sent to the storage, so the storage should
> still be in a consistent state, and thus we can still try to fail
> back to the source QEMU.
> 
> The "fun" problem here is that just because we /try/ to fail back to
> the source QEMU does not mean the source QEMU will now succeed in
> re-acquiring the locks it just released a short time ago.

Right, but if we manage to get the source QEMU out of postcopy migration
in such a scenario, the VM may be manually resumed once the storage
issues are solved. So yes, we can't always roll back, but we should at
least allow some kind of manual recovery.
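
To illustrate what "manually resumed" could mean in practice, here is a
minimal sketch of a QMP client that resumes the paused source VM once
the storage is reachable again.  The socket path /tmp/src-qmp.sock is
made up; qmp_capabilities and cont are the standard QMP commands.  In
practice one would rather use virsh or the libvirt API, this just shows
the bare exchange:

/*
 * Minimal sketch: resume a paused source VM over QMP once storage is
 * fixed.  The socket path is hypothetical; error handling is kept to a
 * minimum on purpose.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static void send_cmd(int fd, const char *json)
{
    char buf[4096];
    ssize_t n;

    if (write(fd, json, strlen(json)) < 0) {
        perror("write");
        exit(1);
    }
    n = read(fd, buf, sizeof(buf) - 1);   /* read one reply (simplified) */
    if (n > 0) {
        buf[n] = '\0';
        printf("QMP reply: %s", buf);
    }
}

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    char greeting[4096];
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    strncpy(addr.sun_path, "/tmp/src-qmp.sock", sizeof(addr.sun_path) - 1);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }
    if (read(fd, greeting, sizeof(greeting) - 1) < 0) {   /* QMP banner */
        perror("read");
        return 1;
    }
    send_cmd(fd, "{\"execute\": \"qmp_capabilities\"}\n");
    send_cmd(fd, "{\"execute\": \"cont\"}\n");             /* resume the VM */
    close(fd);
    return 0;
}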

> Consider the classic dead NFS server problem. The target may have
> acquired one lock and failed on another because of a service
> interruption to the NFS server. Well, the target can't necessarily
> release the lock that it did successfully acquire now. So if we fail
> back to the source, it'll be unable to reacquire the lock as the
> target still holds it.
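
Just to illustrate that failure mode, here is a sketch with plain
fcntl() OFD locks (not QEMU's actual image-locking code, and
/tmp/disk.img is made up): as long as the "target" keeps its exclusive
lock on the image, a second attempt by the "source" fails with EAGAIN:

#define _GNU_SOURCE            /* for F_OFD_SETLK (Linux) */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static int try_lock(const char *path)
{
    struct flock fl = {
        .l_type = F_WRLCK,     /* exclusive lock ... */
        .l_whence = SEEK_SET,  /* ... on the whole file (l_start/l_len 0) */
    };
    int fd = open(path, O_RDWR | O_CREAT, 0600);

    if (fd < 0) {
        perror("open");
        exit(1);
    }
    if (fcntl(fd, F_OFD_SETLK, &fl) < 0) {
        fprintf(stderr, "lock failed: %s\n", strerror(errno));
        close(fd);
        return -1;
    }
    return fd;                 /* lock is held as long as fd stays open */
}

int main(void)
{
    const char *image = "/tmp/disk.img";  /* hypothetical shared image */

    /* "Target QEMU" takes and keeps the exclusive lock. */
    int target_fd = try_lock(image);

    if (target_fd < 0) {
        return 1;
    }

    /* "Source QEMU" tries to reacquire it and fails with EAGAIN. */
    int source_fd = try_lock(image);
    printf("source reacquire %s\n", source_fd < 0 ? "failed" : "succeeded");

    close(target_fd);                     /* only now is the lock released */
    return 0;
}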
> 
> This doesn't mean we shouldn't try to fail back, but there will
> always be some failure scenarios we'll struggle to recover from.
> 
> The "migration paused" state is our last chance, as that leaves both
> QEMUs present while the admin tries to fix the underlying problems.

IIUC from my conversation with Juraj, switching to postcopy-paused can
only happen when the CPUs were already started.

Jirka

