commit a6f230c moved the BlockBackend back to the main AioContext on unplug.
It sets the AioContext of the SCSIDevice to the main AioContext, but s->ctx is
still the iothread's AioContext (if the SCSI controller is configured with an
iothread). So if there are in-flight requests during unplug, a failing
assertion is triggered: the main thread and the iothread are running in
parallel.
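For reference, the assertion that fires is the context check in
hw/scsi/virtio-scsi.c; roughly (a paraphrase of the code of that period, not a
verbatim copy):

/* Paraphrase of the check in hw/scsi/virtio-scsi.c (QEMU ~4.0), not verbatim.
 * With dataplane started, every request path asserts that the device's
 * BlockBackend still lives in the iothread's AioContext (s->ctx).  Once
 * unplug has moved the BlockBackend back to the main context while s->ctx
 * still points at the iothread, any in-flight request trips this assert. */
static inline void virtio_scsi_ctx_check(VirtIOSCSI *s, SCSIDevice *d)
{
    if (s->dataplane_started && d && blk_is_available(d->conf.blk)) {
        assert(blk_get_aio_context(d->conf.blk) == s->ctx);
    }
}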
On 2019/7/17 16:41, Kevin Wolf wrote:
On 16.07.2019 at 04:06, l00284672 wrote:
ping?
On 2019/7/16 10:06, l00284672 wrote:
Forwarded Message
Subject: virtio_scsi_ctx_check failed when detach virtio_scsi disk
Date: Mon, 15 Jul 2019 23:34:24 +0800
From: l00284672
To: kw...@redhat.com, be...@igalia.com, Stefan Hajnoczi, Paolo Bonzini
CC: lizhen...@huawei.com
I found a problem that virtio_scsi_ctx_check failed when detaching a
virtio_scsi disk…
Would the "open" hang as well in that case?
The "open" doesn't hang in that case.
Do you have any better solution to this problem in that case?
On 2019/6/11 0:03, Paolo Bonzini wrote:
On 10/06/19 16:51, l00284672 wrote:
The pread will hang while attaching the disk just when the backend storage
network is disconnected.
I think the locking range of qemu_global_mutex is too large when doing a QMP
operation. What does the qemu_global_mutex really protect? What is the risk
of unlocking qemu_global_mutex during QMP?
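As a self-contained illustration of the effect (hypothetical demo code, not
QEMU source): a "monitor" thread that keeps a global mutex locked across a
blocking operation starves a "vCPU" thread that needs the same mutex, which
is what the guest then reports as a soft lockup.

/* Hypothetical, standalone sketch (not QEMU code): the monitor thread holds
 * the global mutex while doing a slow, blocking operation (standing in for a
 * pread() against a hung backend), so the vCPU thread, which must take the
 * same mutex to finish an exit, is stalled for the whole duration. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t global_mutex = PTHREAD_MUTEX_INITIALIZER;

static void *vcpu_thread(void *arg)
{
    (void)arg;
    for (int i = 0; i < 5; i++) {
        pthread_mutex_lock(&global_mutex);   /* like taking the BQL on a VM exit */
        printf("vcpu: got the lock, handling exit %d\n", i);
        pthread_mutex_unlock(&global_mutex);
        usleep(100 * 1000);
    }
    return NULL;
}

static void *monitor_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&global_mutex);       /* QMP command runs under the lock */
    printf("monitor: opening image, backend is hung...\n");
    sleep(3);                                /* stands in for a blocked pread() */
    pthread_mutex_unlock(&global_mutex);
    printf("monitor: done\n");
    return NULL;
}

int main(void)
{
    pthread_t vcpu, mon;
    pthread_create(&mon, NULL, monitor_thread, NULL);
    usleep(10 * 1000);                       /* let the monitor grab the lock first */
    pthread_create(&vcpu, NULL, vcpu_thread, NULL);
    pthread_join(mon, NULL);
    pthread_join(vcpu, NULL);
    return 0;
}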
Ping?
On 2019/6/4 16:53, l00284672 wrote:
Hi, I found a problem that the virtual machine CPU soft-lockups when I attach
a disk to the VM in the case that the backend storage network has a large
delay or the I/O pressure is too large.
1) The disk xml which I attached is:
2) The bt of the qemu main thread is:
Ok, I will test your patch soon.
On 2018/7/10 23:50, Stefan Hajnoczi wrote:
The virtio-scsi command virtqueues run during hotplug. This creates the
possibility of race conditions since the guest can submit commands while the
monitor is performing hotplug.
See Patch 2 for a fix for the ->reset
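To make the window concrete, here is a simplified paraphrase of the hotplug
handler in hw/scsi/virtio-scsi.c at that time (not the exact code the series
touches): between handing the BlockBackend to the iothread and notifying the
guest, the command virtqueues are already live.

/* Simplified paraphrase of virtio_scsi_hotplug() in hw/scsi/virtio-scsi.c
 * (QEMU ~2.12), not a verbatim copy.  The monitor thread runs this while the
 * iothread keeps processing the command virtqueues, so the guest can submit
 * commands for the new LUN at any point during hotplug. */
static void virtio_scsi_hotplug(HotplugHandler *hotplug_dev, DeviceState *dev,
                                Error **errp)
{
    VirtIODevice *vdev = VIRTIO_DEVICE(hotplug_dev);
    VirtIOSCSI *s = VIRTIO_SCSI(vdev);
    SCSIDevice *sd = SCSI_DEVICE(dev);

    if (s->ctx && !s->dataplane_fenced) {
        /* Move the disk's BlockBackend into the iothread's AioContext. */
        aio_context_acquire(s->ctx);
        blk_set_aio_context(sd->conf.blk, s->ctx);
        aio_context_release(s->ctx);
    }

    if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
        /* Ask the guest to rescan; from here on the guest probes the LUN
         * concurrently with whatever the monitor does next. */
        virtio_scsi_push_event(s, sd, VIRTIO_SCSI_T_TRANSPORT_RESET,
                               VIRTIO_SCSI_EVT_RESET_RESCAN);
    }
}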
yes, I also think so.
On 2018/7/4 0:27, Paolo Bonzini wrote:
On 03/07/2018 09:20, l00284672 wrote:
The scsi inquiry request from the guest is cancelled by qemu. The qemu bt is
below:
(gdb) bt
#0 scsi_req_cancel_async (req=0x7f86d00055c0, notifier=0x0) at hw/scsi/scsi-bus.c:1825
#1
…how to avoid it after calling virtio_scsi_push_event.
On 2018/7/3 15:20, l00284672 wrote:
The disk is missing because the call to scsi_probe_lun failed in the guest.
The guest code is below:
static int scsi_probe_lun(struct scsi_device *sdev, unsigned char *inq_result,
                          int result_len, int *bflags)
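Since the snippet is cut off, here is a heavily abridged paraphrase of that
probe (drivers/scsi/scsi_scan.c in a 3.10-era guest kernel; not the real
function body): the LUN is only added if the INQUIRY issued here succeeds,
so a cancelled INQUIRY leaves the disk missing until the next rescan.

/* Heavily abridged paraphrase of the guest kernel's scsi_probe_lun()
 * (drivers/scsi/scsi_scan.c, 3.10-era); not the real function body.
 * If the INQUIRY fails, the caller scsi_probe_and_add_lun() gives up on
 * the LUN, which is why the disk goes missing when QEMU cancels the
 * request during hotplug. */
static int scsi_probe_lun(struct scsi_device *sdev, unsigned char *inq_result,
                          int result_len, int *bflags)
{
    unsigned char scsi_cmd[MAX_COMMAND_SIZE];
    struct scsi_sense_hdr sshdr;
    int result;

    memset(scsi_cmd, 0, 6);
    scsi_cmd[0] = INQUIRY;
    scsi_cmd[4] = 36;                       /* minimum INQUIRY response length */

    result = scsi_execute_req(sdev, scsi_cmd, DMA_FROM_DEVICE,
                              inq_result, 36, &sshdr,
                              HZ / 2 + HZ * scsi_inq_timeout, 3, NULL);

    return result ? -EIO : 0;               /* failure: the LUN is not added */
}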
…requests in device_reset.
On 2018/7/2 21:15, Stefan Hajnoczi wrote:
On Wed, Jun 27, 2018 at 06:33:16PM +0800, l00284672 wrote:
ping
On 2018/6/27 18:33, l00284672 wrote:
Hi, I found a bug that disks go missing (not all disks) in the guest
intermittently when hotplugging several virtio-scsi disks consecutively.
After rebooting the guest, the missing disks appear again.
The guest is CentOS 7.3 running on a CentOS 7.3 host and the SCSI controllers
are configured with…
…mirror_exit.
On 2018/6/12 9:45, Fam Zheng wrote:
On Mon, 06/11 11:31, l00284672 wrote:
ping
On 2018/6/11 11:31, l00284672 wrote:
I tried your patch; with my modification below it can solve this problem.
void blk_set_aio_context(BlockBackend *blk, AioContext *new_context)
{
    BlockDriverState *bs = blk_bs(blk);
    ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
}
}
I added bdrv_ref before bdrv_set_aio_context to avoid bs being freed in
mirror_exit. Do you agree with my modification?
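Spelled out, the proposed change would look roughly like this against the
blk_set_aio_context() of that time (a sketch, not the exact diff):

/* Sketch of the proposed change against block/block-backend.c of that era,
 * not the exact patch.  The bdrv_ref/bdrv_unref pair keeps the BDS alive
 * across bdrv_set_aio_context(), whose nested aio_poll() can run
 * mirror_exit() and drop the last reference to bs. */
void blk_set_aio_context(BlockBackend *blk, AioContext *new_context)
{
    BlockDriverState *bs = blk_bs(blk);
    ThrottleGroupMember *tgm = &blk->public.throttle_group_member;

    if (bs) {
        bdrv_ref(bs);                       /* added: pin bs across the switch */

        if (tgm->throttle_state) {
            bdrv_drained_begin(bs);
            throttle_group_detach_aio_context(tgm);
            throttle_group_attach_aio_context(tgm, new_context);
            bdrv_drained_end(bs);
        }
        bdrv_set_aio_context(bs, new_context);

        bdrv_unref(bs);                     /* added: drop the extra reference */
    }
}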
On 2018/6/11 11:01, l00284672 wrote:
Thanks for your reply.
I tried your patch, but it didn't work: qemu crashed. The qemu crash bt is
below:
(gdb) bt
#0 bdrv_det
…mirror_exit is done in aio_poll(qemu_get_aio_context(), true). In
mirror_exit, the top bs will be freed by bdrv_unref, so it causes a NULL
pointer access in the follow-up procedure.
On 2018/6/10 15:43, Fam Zheng wrote:
On Sat, 06/09 17:10, l00284672 wrote:
Hi, I found a dead loop in qemu when doing blockJobAbort and VM suspend
simultaneously.
The qemu bt is below:
#0 0x7ff58b53af1f in ppoll () from /lib64/libc.so.6
#1 0x007fdbd9 in ppoll (__ss=0x0, __timeout=0x7ffcf7055390, __nfds=, __fds=) at /usr/include/bits/poll2.h:77
#2 qemu
OK, thanks!
On 2017/11/10 23:33, Stefan Hajnoczi wrote:
On Sat, Oct 21, 2017 at 01:34:00PM +0800, Zhengui Li wrote:
From: Zhengui
In blk_remove_bs, all I/O should be completed before removing the throttle
timers. If there is in-flight I/O, removing the throttle timers here will
cause the in-flight I/O…
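The shape of the fix under discussion is roughly the following (a sketch
against the blk_remove_bs() of that era, not the exact committed patch):

/* Sketch only, not the exact committed patch.  Draining the BlockBackend
 * makes sure every in-flight (possibly throttled) request has completed
 * before the throttle timers it may depend on are detached. */
void blk_remove_bs(BlockBackend *blk)
{
    ThrottleGroupMember *tgm = &blk->public.throttle_group_member;

    notifier_list_notify(&blk->remove_bs_notifiers, blk);
    if (tgm->throttle_state) {
        /* All in-flight I/O must complete before the timers go away,
         * otherwise a throttled request could fire after they are freed. */
        blk_drain(blk);
        throttle_timers_detach_aio_context(&tgm->throttle_timers);
    }

    blk_update_root_state(blk);

    bdrv_root_unref_child(blk->root);
    blk->root = NULL;
}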