On Tue, Sep 09, 2025 at 12:03:11PM -0400, Steven Sistare wrote:
> On 9/9/2025 11:24 AM, Peter Xu wrote:
> > On Tue, Sep 09, 2025 at 10:36:16AM -0400, Steven Sistare wrote:
> > > On 9/5/2025 12:48 PM, Peter Xu wrote:
> > > > Add Vladimir and Dan.
> > > > 
> > > > On Thu, Aug 14, 2025 at 10:17:14AM -0700, Steve Sistare wrote:
> > > > > This patch series adds the live migration cpr-exec mode.
> > > > > 
> > > > > The new user-visible interfaces are:
> > > > >     * cpr-exec (MigMode migration parameter)
> > > > >     * cpr-exec-command (migration parameter)
> > > > > 
> > > > > cpr-exec mode is similar in most respects to cpr-transfer mode, with the
> > > > > primary difference being that old QEMU directly exec's new QEMU.  The user
> > > > > specifies the command to exec new QEMU in the migration parameter
> > > > > cpr-exec-command.
> > > > > 
> > > > > Why?
> > > > > 
> > > > > In a containerized QEMU environment, cpr-exec reuses an existing QEMU
> > > > > container and its assigned resources.  By contrast, cpr-transfer mode
> > > > > requires a new container to be created on the same host as the target of
> > > > > the CPR operation.  Resources must be reserved for the new container, while
> > > > > the old container still reserves resources until the operation completes.
> > > > > Avoiding overcommitment requires extra work in the management layer.
> > > > 
> > > > Can we spell out what these resources are?
> > > > 
> > > > CPR definitely relies on completely shared memory.  That's already not a
> > > > concern.
> > > > 
> > > > CPR resolves resources that are bound to devices like VFIO by passing over
> > > > FDs; these are not overcommitted either.
> > > > 
> > > > Is it accounting for QEMU/KVM process overhead?  That would really be
> > > > trivial, IMHO, but maybe something else?
> > > 
> > > Accounting is one issue, and it is not trivial.  Another is arranging
> > > exclusive use of a set of CPUs, the same set for the old and new container,
> > > concurrently.  Another is avoiding namespace conflicts, the kind that make
> > > localhost migration difficult.
> > > 
> > > > > This is one reason why a cloud provider may prefer cpr-exec.  A second
> > > > > reason is that the container may include agents with their own connections
> > > > > to the outside world, and such connections remain intact if the container
> > > > > is reused.
> > > > 
> > > > We discussed this one.  Personally I still cannot understand why this is a
> > > > concern if the agents can be trivially started as a new instance.  But I
> > > > admit I may not know the whole picture.  To me, the above point is more
> > > > persuasive, but I'll need to understand which part is over-committed in a
> > > > way that can be a problem.
> > > 
> > > Agents can be restarted, but that would sever the connection to the outside
> > > world.  With cpr-transfer or any local migration, you would need agents
> > > outside of old and new containers that persist.
> > > 
> > > With cpr-exec, connections can be preserved without requiring the end user
> > > to reconnect, and can be done trivially, by preserving chardevs.  With that
> > > support in qemu, the management layer does nothing extra to preserve them.
> > > chardev support is not part of this series but is part of my vision, and
> > > makes exec mode even more compelling.
> > > 
> > > Management layers have a lot of code and complexity to manage live
> > > migration, resources, and connections.  It requires modification to support
> > > cpr-transfer.  All that can be bypassed with exec mode.  Less complexity,
> > > less maintenance, and fewer points of failure.  I know this because I
> > > implemented exec mode in OCI at Oracle, and we use it in production.
> > 
> > I wonder how this part works in Vladimir's use case.
> > 
> > > > After all, cloud hosts should reserve some extra memory anyway to cope with
> > > > dynamic resource allocations all the time (e.g., when live migration starts,
> > > > KVM pgtables can drastically increase if huge pages are enabled, for
> > > > PAGE_SIZE tracking).  I assumed the over-committed portion should be less
> > > > than that, and when it's also temporary (src QEMU will release all resources
> > > > after live upgrade) then it looks manageable.
> > > > > 
> > > > > How?
> > > > > 
> > > > > cpr-exec preserves descriptors across exec by clearing the CLOEXEC flag,
> > > > > and by sending the unique name and value of each descriptor to new QEMU
> > > > > via CPR state.
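
(As an aside for readers following along: clearing the flag is the standard
POSIX fcntl dance, roughly as sketched below.  This is only an illustration of
the idea, not the actual helper added in patch 3 of the series.)

    #include <fcntl.h>

    /* Keep an fd open across exec by clearing its close-on-exec flag. */
    static int keep_fd_across_exec(int fd)
    {
        int flags = fcntl(fd, F_GETFD);

        if (flags < 0) {
            return -1;
        }
        return fcntl(fd, F_SETFD, flags & ~FD_CLOEXEC);
    }
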
> > > > > 
> > > > > CPR state cannot be sent over the normal migration channel, because
> > > > > devices and backends are created prior to reading the channel, so this
> > > > > mode sends CPR state over a second migration channel that is not visible
> > > > > to the user.  New QEMU reads the second channel prior to creating devices
> > > > > or backends.
> > > > > 
> > > > > The exec itself is trivial.  After writing to the migration channels, the
> > > > > migration code calls a new main-loop hook to perform the exec.
> > > > > 
> > > > > Example:
> > > > > 
> > > > > In this example, we simply restart the same version of QEMU, but in
> > > > > a real scenario one would use a new QEMU binary path in cpr-exec-command.
> > > > > 
> > > > >     # qemu-kvm -monitor stdio
> > > > >     -object memory-backend-memfd,id=ram0,size=1G
> > > > >     -machine memory-backend=ram0 -machine aux-ram-share=on ...
> > > > > 
> > > > >     QEMU 10.1.50 monitor - type 'help' for more information
> > > > >     (qemu) info status
> > > > >     VM status: running
> > > > >     (qemu) migrate_set_parameter mode cpr-exec
> > > > >     (qemu) migrate_set_parameter cpr-exec-command qemu-kvm ... -incoming file:vm.state
> > > > >     (qemu) migrate -d file:vm.state
> > > > >     (qemu) QEMU 10.1.50 monitor - type 'help' for more information
> > > > >     (qemu) info status
> > > > >     VM status: running
> > > > > 
> > > > > Steve Sistare (9):
> > > > >     migration: multi-mode notifier
> > > > >     migration: add cpr_walk_fd
> > > > >     oslib: qemu_clear_cloexec
> > > > >     vl: helper to request exec
> > > > >     migration: cpr-exec-command parameter
> > > > >     migration: cpr-exec save and load
> > > > >     migration: cpr-exec mode
> > > > >     migration: cpr-exec docs
> > > > >     vfio: cpr-exec mode
> > > > 
> > > > The other thing is, as Vladimir is working on (what looks like) a cleaner
> > > > way of passing FDs fully relying on unix sockets, I want to understand
> > > > better the relationship between his work and the exec model.
> > > 
> > > His work is based on my work -- the ability to embed a file descriptor in a
> > > migration stream with a VMSTATE_FD declaration -- so it is compatible.
> > > 
> > > The cpr-exec series preserves VMSTATE_FD across exec by remembering the fd
> > > integer and embedding that in the data stream.  See the changes in
> > > vmstate-types.c in [PATCH V3 7/9] migration: cpr-exec mode.
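
(For readers unfamiliar with VMSTATE_FD: it is declared in a device's VMSD
field list like any other scalar field.  A rough sketch, assuming the usual
(field, State) macro shape; the device and field names below are made up, not
taken from the series:)

    typedef struct MyDevState {
        int backend_fd;                     /* fd to be carried across CPR */
        /* ... */
    } MyDevState;

    static const VMStateDescription vmstate_mydev = {
        .name = "mydev",
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (const VMStateField[]) {
            /* The fd itself travels in the migration stream / CPR state. */
            VMSTATE_FD(backend_fd, MyDevState),
            VMSTATE_END_OF_LIST()
        }
    };
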
> > > 
> > > Thus cpr-exec will still preserve tap devices via Vladimir's code.
> > > 
> > > > I still personally think we should always stick with unix sockets, but I'm
> > > > open to being convinced on the above limitations.  If exec is better than
> > > > cpr-transfer in any way, the hope is more people can and should adopt it.
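
(For context, "passing FDs over unix sockets" here refers to the standard
SCM_RIGHTS ancillary-data mechanism, roughly as sketched below.  This is
generic POSIX, not code from either series.)

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send one fd over a connected unix socket using SCM_RIGHTS. */
    static int send_fd(int sock, int fd)
    {
        char byte = 0;
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
        char ctrl[CMSG_SPACE(sizeof(int))];
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = ctrl, .msg_controllen = sizeof(ctrl),
        };
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

        return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
    }
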
> > > 
> > > Various people and companies have expressed interest in CPR and want to
> > > explore cpr-exec.  Vladimir was one; he chose transfer instead, and that is
> > > fine, but give people the option.  And Oracle continues to use cpr-exec mode.
> > 
> > How does cpr-exec guarantee everything will go smoothly with no failure
> > after the exec?  Essentially, this is Vladimir's question 1.
> 
> Live migration can fail if dirty memory copy does not converge.  CPR does not.

As we're comparing cpr-transfer and cpr-exec, this one doesn't really count, 
AFAIU.

> cpr-transfer can fail if it fails to create a new container.  cpr-exec cannot.
> cpr-transfer can fail to allocate resources.  cpr-exec needs less.

These two could happen on heavily occupied hosts indeed, but is it really that
common an issue when the whole guest memory portion is excluded after all?

> 
> cpr-exec failure is almost always due to a QEMU bug.  For example, a new
> feature has been added to new QEMU, and is *not* forced to false in a
> compatibility entry for the old machine model.  We do our best to find and
> fix those before going into production.  In production, the success rate is
> high.  That is one reason I like the mode so much.

Yes, but this is still a major issue.  The problem is I don't think we have a
good way to provide 100% coverage of the code base for all kinds of
migrations.

After all, we have tons of needed() hooks in VMSDs, so we always need to be
prepared for the migration stream to change from time to time even with
exactly the same device setup, and some of those subsections may be prone to
put() failures on the other side.
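
To illustrate what I mean, a minimal, made-up subsection (not taken from any
real device): whether it is emitted depends on runtime guest/device state, so
two migrations of the same setup can carry different streams.

    typedef struct FooState {
        bool feature_armed;
        uint32_t feature_reg;
    } FooState;

    /* Hypothetical optional subsection, only sent when the feature is armed. */
    static bool foo_feature_needed(void *opaque)
    {
        FooState *s = opaque;

        return s->feature_armed;       /* depends on runtime state */
    }

    static const VMStateDescription vmstate_foo_feature = {
        .name = "foo/feature",
        .version_id = 1,
        .minimum_version_id = 1,
        .needed = foo_feature_needed,  /* stream contents vary run to run */
        .fields = (const VMStateField[]) {
            VMSTATE_UINT32(feature_reg, FooState),
            VMSTATE_END_OF_LIST()
        }
    };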

After all, live migration was designed to cope with such failures, so at least
the VM won't crash on src if anything goes wrong.

Precopy always does that, and we're trying to make postcopy do the same (which
Juraj is working on), so that postcopy can FAIL and roll back to src too if
the device state doesn't apply cleanly.

It's still not uncommon for guest OS / driver behavior changes to cause some
corner-case migration failures that only show up when applying the states.

That's IMHO a high risk even if the probability is low.

> 
> > Feel free to answer there, because there's also question 2 (which we
> > covered somewhat before, but maybe not as much).
> 
> Question 2 is about minimizing downtime by starting new QEMU while old QEMU
> is still running.  That is true, but the savings are small.

I thought we discussed this, and it should be known that there are at least
the two major items below that will increase downtime (either directly
accounted into downtime, or by slowing down vCPUs later)?

  - Process pgtable, aka, QEMU's view of guest mem
  - EPT pgtable, aka, vCPU's view of guest mem

Populating these normally takes time when the VM becomes huge, while
cpr-transfer can still benefit from pre-population before switchover.

IIUC that's a known issue, but please correct me if I remembered it wrong.
I think it means this issue is more severe with larger VMs, which is a
trade-off.  It's just that I don't know what else might be relevant.

Personally I don't think this is a blocker for cpr-exec, but we should record
the differences.  It would be best, IMHO, to have a section in cpr.rst to
discuss this, helping users decide which to choose when both benefit from CPR
in general.

Meanwhile, just to mention that a unit test for cpr-exec is still missing.

> > The other thing I don't remember if we discussed is how cpr-exec manages
> > device hotplug.  Say, what happens if there are devices hot plugged (via
> > QMP) and then a cpr-exec migration happens?
> 
> One method: start new qemu with the original command-line arguments plus -S,
> then mgmt re-sends the hot plug commands to the qemu monitor.  Same as for
> live migration.
> > Does the cpr-exec cmdline need to convert all QMP hot-plugged devices into
> > cmdlines and append them?
> 
> That also works, and is a technique I have used to reduce guest pause time.
> 
> > How to guarantee that the src/dst device topology matches exactly with the
> > new cmdline?
> 
> That is up to the mgmt layer, to know how QEMU was originally started, and
> what has been hot plugged afterwards.  The fast qom-list-get command that
> I recently added can help here.

I see.  If you think that is the best way to consume cpr-exec, would you
add a small section into the doc patch for it as well?

-- 
Peter Xu

