On 08.09.2015 at 10:00, Denis V. Lunev wrote:
> From: Raushaniya Maksudova <[email protected]>
>
> If the disk-deadlines option is enabled for a drive, the completion time
> of this drive's requests is monitored. The method is as follows (assume
> below that this option is enabled).
>
> Every drive has its own red-black tree for keeping track of its requests.
> The expiration time of a request is the key, and the cookie (the id of the
> request) is the corresponding node. Assume that every request has 8 seconds
> to complete. If a request is not completed in time for some reason (server
> crash or something else), the timer of this drive fires and the
> corresponding callback requests to stop the Virtual Machine (VM).
>
> The VM remains stopped until all requests from the disk which caused the
> VM to stop are completed. Furthermore, if there are other disks whose
> requests are still waiting to be completed, the VM is not restarted: it
> waits for the completion of all "late" requests from all disks.
>
> Signed-off-by: Raushaniya Maksudova <[email protected]>
> Signed-off-by: Denis V. Lunev <[email protected]>
> CC: Stefan Hajnoczi <[email protected]>
> CC: Kevin Wolf <[email protected]>
> +    disk_deadlines->expired_tree = true;
> +    need_vmstop = !atomic_fetch_inc(&num_requests_vmstopped);
> +    pthread_mutex_unlock(&disk_deadlines->mtx_tree);
> +
> +    if (need_vmstop) {
> +        qemu_system_vmstop_request_prepare();
> +        qemu_system_vmstop_request(RUN_STATE_PAUSED);
> +    }
> +}
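
To make the mechanism described in the quoted commit message more concrete,
here is a minimal, self-contained sketch of the per-drive bookkeeping. It is
not taken from the patch: all names (DiskDeadlines, disk_deadlines_insert,
disk_deadlines_check, ...) are hypothetical, a glib GTree stands in for the
red-black tree, and a printf stands in for the actual VM-stop request.

/*
 * Minimal sketch of the per-drive bookkeeping described in the quoted
 * commit message. All names are hypothetical; a glib GTree stands in for
 * the red-black tree and printf for the actual VM-stop request.
 *
 * Build (example): gcc sketch.c $(pkg-config --cflags --libs glib-2.0) -lpthread
 */
#include <glib.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define REQUEST_DEADLINE_NS (8LL * 1000000000LL)    /* 8 seconds per request */

typedef struct DiskDeadlines {
    GTree *tree;                /* key: expiration time (ns), value: request cookie */
    pthread_mutex_t mtx_tree;   /* protects tree and expired_tree */
    bool expired_tree;          /* set once a request of this drive is overdue */
} DiskDeadlines;

static gint expire_cmp(gconstpointer a, gconstpointer b, gpointer unused)
{
    int64_t ka = *(const int64_t *)a, kb = *(const int64_t *)b;
    return (ka > kb) - (ka < kb);
}

static int64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

static void disk_deadlines_init(DiskDeadlines *dd)
{
    dd->tree = g_tree_new_full(expire_cmp, NULL, g_free, NULL);
    pthread_mutex_init(&dd->mtx_tree, NULL);
    dd->expired_tree = false;
}

/* Track a newly issued request: the key is its absolute expiration time. */
static void disk_deadlines_insert(DiskDeadlines *dd, void *cookie)
{
    int64_t *expire = g_new(int64_t, 1);
    *expire = now_ns() + REQUEST_DEADLINE_NS;

    pthread_mutex_lock(&dd->mtx_tree);
    g_tree_insert(dd->tree, expire, cookie);
    pthread_mutex_unlock(&dd->mtx_tree);
}

/* In-order traversal visits the smallest key first; stop right after it. */
static gboolean earliest_cb(gpointer key, gpointer value, gpointer data)
{
    *(int64_t *)data = *(int64_t *)key;
    return TRUE;
}

/*
 * Timer callback: if the earliest deadline of this drive has passed, mark
 * the drive as expired and (in the real patch) request that the VM stop.
 */
static void disk_deadlines_check(DiskDeadlines *dd)
{
    int64_t earliest = INT64_MAX;

    pthread_mutex_lock(&dd->mtx_tree);
    g_tree_foreach(dd->tree, earliest_cb, &earliest);
    if (earliest <= now_ns() && !dd->expired_tree) {
        dd->expired_tree = true;
        printf("overdue request, would call qemu_system_vmstop_request()\n");
    }
    pthread_mutex_unlock(&dd->mtx_tree);
}

int main(void)
{
    DiskDeadlines dd;
    int dummy_cookie;

    disk_deadlines_init(&dd);
    disk_deadlines_insert(&dd, &dummy_cookie);
    disk_deadlines_check(&dd);      /* nothing overdue yet, so it stays silent */
    return 0;
}

On completion of a request the corresponding node would be removed from the
tree; that part is omitted here for brevity.
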
What behaviour does this result in? If I understand correctly, this is an
indirect call to do_vm_stop(), which involves a bdrv_drain_all(). In that
case, qemu would block completely (including an unresponsive monitor) until
the request can complete.

Is this what you are seeing with this patch, or why doesn't the
bdrv_drain_all() call cause such effects?
Kevin
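
For readers unfamiliar with this code path: the concern raised above is that
the vmstop request eventually leads to do_vm_stop(), whose bdrv_drain_all()
is essentially a loop that polls until no block requests are in flight. The
following toy program (not QEMU code; every name in it is made up) illustrates
why such a loop never returns while a request is stuck, blocking its caller,
and therefore the monitor, as well.

/*
 * Toy model (not QEMU code) of the blocking behaviour described above:
 * a drain-style loop keeps polling until every in-flight request has
 * completed. If a request is stuck (e.g. an unreachable storage server),
 * the loop never terminates and whoever called it is blocked too.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static atomic_int in_flight;        /* number of outstanding requests */

/* Stand-in for one poll iteration; returns whether any progress was made. */
static bool poll_once(void)
{
    return false;                   /* in this toy model the request never completes */
}

static void drain_all(void)
{
    while (atomic_load(&in_flight) > 0) {
        if (!poll_once()) {
            usleep(1000);           /* no progress; keep waiting */
        }
    }
}

int main(void)
{
    atomic_store(&in_flight, 1);    /* one request that will never finish */
    printf("draining...\n");
    drain_all();                    /* never returns in this scenario */
    printf("done\n");
    return 0;
}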