> > > > +    for (; start < end; start++) {
> > > > +        if (block_job_is_cancelled(&job->common)) {
> > > > +            ret = -1;
> > > > +            break;
> > > > +        }
> > > > +
> > > > +        /* we need to yield so that qemu_aio_flush() returns.
> > > > +         * (without, VM does not reboot)
> > > > +         * Note: use 1000 instead of 0 (0 prioritize this task too
> > > > much)
> > >
> > > indentation
> > >
> > > What does "0 prioritize this task too much" mean? If no rate limit
> > > has been set the job should run at full speed. We should not
> > > hardcode arbitrary delays like 1000.
> >
> > The VM itself gets somehow slower during backup - do not know why. As
> > workaround sleep 1000 works.
>
> Please find out why, it's a bug that an arbitrary sleep hides but doesn't fix
> (plus the sleep makes backup less efficient).
>
> If the VM becomes slow this loop is probably "spinning" without doing blocking
> I/O and only doing sleep 0. I guess that can happen when you loop over blocks
> that have already been backed up (bit has been set)?
Well, 'slow' is the wrong term. The VM just gets a bit unresponsive - it's more of a feeling than a measurement. I think this is because the backup job runs at the same priority as normal guest IO. We previously used LVM and ran the backup with 'idle' IO priority (CFQ) to avoid exactly this behavior. But qemu does not provide an IO queue where we can set scheduling priorities?
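For reference, the 'idle' behavior in the old LVM setup simply came from the kernel's per-process I/O scheduling class, i.e. what "ionice -c 3" sets. A rough sketch of the same thing from C (Linux-specific; set_backup_io_idle() is just an illustrative name, not an existing API):

#include <unistd.h>
#include <sys/syscall.h>

/* Linux has no libc wrapper for ioprio_set(), so go through syscall(2). */
#define IOPRIO_WHO_PROCESS   1
#define IOPRIO_CLASS_IDLE    3
#define IOPRIO_CLASS_SHIFT   13
#define IOPRIO_PRIO_VALUE(class, data) (((class) << IOPRIO_CLASS_SHIFT) | (data))

/* Equivalent of running the backup process under "ionice -c 3":
 * CFQ only services its requests when the disk is otherwise idle. */
static int set_backup_io_idle(void)
{
    return syscall(SYS_ioprio_set, IOPRIO_WHO_PROCESS, 0 /* current process */,
                   IOPRIO_PRIO_VALUE(IOPRIO_CLASS_IDLE, 0));
}

That works for an external backup process doing its own reads, but as far as I can see it does not help here: the block job's I/O is issued by the same qemu threads as the guest's I/O, so the kernel cannot tell them apart - which is why some notion of per-job priority (or at least rate limiting) inside qemu itself would be needed.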