On 02/24/2017 03:31 PM, John Snow wrote:
>>
>> But the Backup Server could instead connect to the NAS directly, avoiding
>> load on the frontend LAN and the QEMU node.
>>
>
> In a live backup I don't see how you will be removing QEMU from the data
> transfer loop. QEMU is the only process that knows what the correct view
> of the image is, and needs to facilitate.
>
> It's not safe to copy the blocks directly without QEMU's mediation.
Although we may already have enough tools in place to achieve that:
create a temporary qcow2 wrapper around the primary image via external
snapshot, so that the primary image becomes read-only in QEMU; then use
a block-status mechanism (whether the NBD block status extension, or
directly reading a persistent bitmap) to drive a more efficient offline
transfer of just the relevant portions of that main file; then live
block-commit to get QEMU writing to the file again.
In other words, any time your algorithm wants to cause an I/O freeze to
a particular file, the solution is to add a qcow2 external snapshot
followed by a live commit.
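As a rough sketch, the two ends of that freeze window map onto existing
QMP commands; the device name and file path below are hypothetical, and
option details vary by QEMU version:

{ "execute": "blockdev-snapshot-sync",
  "arguments": { "device": "drive0",
                 "snapshot-file": "/vms/wrapper.qcow2",
                 "format": "qcow2" } }

{ "execute": "block-commit",
  "arguments": { "device": "drive0" } }

Note that block-commit of the active layer runs as a job that reaches a
synchronized state rather than finishing on its own, so the management
application completes it with block-job-complete once it is ready.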
So tweaking the proposal a few mails ago:
fsfreeze (optional)
create qcow2 snapshot wrapper as a write lock (via QMP)
fsthaw - now with no risk of violating guest timing constraints
dirtymap = find all blocks that are dirty since last backup (via named
bitmap/NBD block status)
foreach block in dirtymap {
    copy to backup via external software
}
live commit image (via QMP)
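The copy loop above can be sketched in Python, assuming the dirty map
has already been extracted (from NBD block status or a persistent
bitmap) as a list of (offset, length) extents; the file names, the
extent format, and the 64k granularity are all illustrative, not part
of any QEMU interface:

```python
BLOCK = 64 * 1024  # assumed granularity of the dirty bitmap

def copy_dirty_extents(src_path, dst_path, extents):
    """Copy only the dirty extents from src into a same-sized target."""
    with open(src_path, "rb") as src, open(dst_path, "r+b") as dst:
        for offset, length in extents:
            src.seek(offset)
            dst.seek(offset)
            dst.write(src.read(length))

# Example: a 4-block image where blocks 1 and 3 are dirty.
with open("primary.img", "wb") as f:
    f.write(b"A" * BLOCK + b"B" * BLOCK + b"C" * BLOCK + b"D" * BLOCK)
with open("backup.img", "wb") as f:
    f.truncate(4 * BLOCK)  # sparse target with the same virtual size

copy_dirty_extents("primary.img", "backup.img",
                   [(1 * BLOCK, BLOCK), (3 * BLOCK, BLOCK)])
```

Since the primary image is read-only behind the qcow2 wrapper for the
whole loop, this copy can run in any external process without QEMU
mediating each read.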
The window where guest I/O is frozen is small (the freeze/snapshot
create/thaw steps can be done in less than a second), while the window
where you are extracting incremental backup data is longer (during that
time, guest I/O is happening into a wrapper qcow2 file).
--
Eric Blake eblake redhat com +1-919-301-3266
Libvirt virtualization library http://libvirt.org