On Thu, Jul 3, 2014 at 7:56 PM, Paolo Bonzini wrote:
> On 03/07/2014 13:50, Ming Lei wrote:
>
>>> Yes, you can just move the atomic_inc/atomic_dec in aio_poll.
>>
>>
>> If you mean move inc/dec of 'running' in aio_poll, that won't work.
>> When aio_notify() sees 'running', it won't set the notifier, and may trap to ppoll().
On 03/07/2014 13:50, Ming Lei wrote:
Yes, you can just move the atomic_inc/atomic_dec in aio_poll.
If you mean move inc/dec of 'running' in aio_poll, that won't work.
When aio_notify() sees 'running', it won't set the notifier, and may
trap to ppoll().
I mean move it to aio_poll, around the
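A minimal, self-contained sketch of the scheme being debated here: bracket the blocking poll() with an atomic counter, and let the notify path skip the eventfd write when that counter is zero. This is not the actual QEMU code; struct loop, sleepers, loop_notify() and loop_poll() are made-up stand-ins for AioContext, aio_notify() and aio_poll().

#include <poll.h>
#include <stdatomic.h>
#include <stdint.h>
#include <unistd.h>

struct loop {
    int event_fd;          /* wakeup eventfd, like the AioContext notifier */
    atomic_int sleepers;   /* threads that may be blocked in poll() */
};

/* The aio_notify()-like path, called e.g. when a BH is scheduled. */
static void loop_notify(struct loop *l)
{
    /* If nobody is (about to be) asleep in poll(), the write() here and
     * the later read() are pure syscall overhead, so skip them. */
    if (atomic_load(&l->sleepers) == 0) {
        return;
    }
    uint64_t one = 1;
    (void)!write(l->event_fd, &one, sizeof(one));
}

/* The aio_poll()-like path; the inc/dec brackets the blocking poll(). */
static int loop_poll(struct loop *l, struct pollfd *fds, nfds_t nfds,
                     int timeout_ms)
{
    atomic_fetch_add(&l->sleepers, 1);   /* make ourselves visible first */
    int ret = poll(fds, nfds, timeout_ms);
    atomic_fetch_sub(&l->sleepers, 1);
    return ret;
}

In this naive form the objection above still applies: loop_notify() can read sleepers == 0 an instant before loop_poll() increments the counter and blocks, so a wakeup can be lost unless the polling side re-checks for pending work after the increment.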
On Thu, Jul 3, 2014 at 6:29 PM, Paolo Bonzini wrote:
> On 03/07/2014 06:54, Ming Lei wrote:
>
>> On Thu, Jul 3, 2014 at 12:21 AM, Paolo Bonzini
>> wrote:
>>>
>>> On 02/07/2014 17:45, Ming Lei wrote:
The attached debug patch skips aio_notify() if qemu_bh_schedule
is running from the current aio context, but it looks like there are still 120K writes triggered.
On 03/07/2014 06:54, Ming Lei wrote:
On Thu, Jul 3, 2014 at 12:21 AM, Paolo Bonzini wrote:
On 02/07/2014 17:45, Ming Lei wrote:
The attached debug patch skips aio_notify() if qemu_bh_schedule
is running from the current aio context, but it looks like there are still 120K
writes triggered. (without the patch, 400K can be observed in the same test)
On Thu, Jul 3, 2014 at 12:21 AM, Paolo Bonzini wrote:
> On 02/07/2014 17:45, Ming Lei wrote:
>> The attached debug patch skips aio_notify() if qemu_bh_schedule
>> is running from the current aio context, but it looks like there are still 120K
>> writes triggered. (without the patch, 400K can be observed in the same test)
On Thu, Jul 3, 2014 at 12:38 AM, Paolo Bonzini wrote:
>
>> On Thu, Jul 3, 2014 at 12:23 AM, Paolo Bonzini wrote:
>> > On 02/07/2014 18:13, Ming Lei wrote:
>> >
>> >> That must be for generating guest irqs, which should have been
>> >> easy to process as a batch.
>> >
>> >
>> > No, guest irqs are generated (with notify_guest) on every I/O completion even in 2.0.
> On Thu, Jul 3, 2014 at 12:23 AM, Paolo Bonzini wrote:
> > On 02/07/2014 18:13, Ming Lei wrote:
> >
> >> That must be for generating guest irqs, which should have been
> >> easy to process as a batch.
> >
> >
> > No, guest irqs are generated (with notify_guest) on every I/O completion
> > even in 2.0.
On Thu, Jul 3, 2014 at 12:23 AM, Paolo Bonzini wrote:
> On 02/07/2014 18:13, Ming Lei wrote:
>
>> That must be for generating guest irqs, which should have been
>> easy to process as a batch.
>
>
> No, guest irqs are generated (with notify_guest) on every I/O completion
> even in 2.0.
In 2.0,
On 02/07/2014 18:13, Ming Lei wrote:
That must be for generating guest irqs, which should have been
easy to process as a batch.
No, guest irqs are generated (with notify_guest) on every I/O completion
even in 2.0.
Paolo
On 02/07/2014 17:45, Ming Lei wrote:
> The attached debug patch skips aio_notify() if qemu_bh_schedule
> is running from the current aio context, but it looks like there are still 120K
> writes triggered. (without the patch, 400K can be observed in
> the same test)
Nice. Another observation is that after a
On Wed, Jul 2, 2014 at 11:45 PM, Ming Lei wrote:
> On Wed, Jul 2, 2014 at 4:54 PM, Stefan Hajnoczi wrote:
>> On Tue, Jul 01, 2014 at 06:49:30PM +0200, Paolo Bonzini wrote:
>>> On 01/07/2014 16:49, Ming Lei wrote:
>>> >Let me provide some data when running randread (bs=4k, libaio)
>>> >from the VM for 10 sec:
On Wed, Jul 2, 2014 at 4:54 PM, Stefan Hajnoczi wrote:
> On Tue, Jul 01, 2014 at 06:49:30PM +0200, Paolo Bonzini wrote:
>> On 01/07/2014 16:49, Ming Lei wrote:
>> >Let me provide some data when running randread (bs=4k, libaio)
>> >from the VM for 10 sec:
>> >
>> >1), qemu.git/master
>> >- write(): 731K
On 02/07/2014 12:01, Kevin Wolf wrote:
On 02.07.2014 at 11:48, Paolo Bonzini wrote:
On 02/07/2014 11:39, Kevin Wolf wrote:
On 02.07.2014 at 11:13, Paolo Bonzini wrote:
I don't think starting with that fast path as _the_ solution is a good
idea. It would essentially restrict dataplane to the scenarios that used to work well in 2.0
On 02.07.2014 at 11:48, Paolo Bonzini wrote:
> On 02/07/2014 11:39, Kevin Wolf wrote:
> >On 02.07.2014 at 11:13, Paolo Bonzini wrote:
> >I don't think starting with that fast path as _the_ solution is a good
> >idea. It would essentially restrict dataplane to the scenarios that used to work well in 2.0
On 02/07/2014 11:39, Kevin Wolf wrote:
On 02.07.2014 at 11:13, Paolo Bonzini wrote:
I don't think starting with that fast path as _the_ solution is a good
idea. It would essentially restrict dataplane to the scenarios that used
to work well in 2.0 - just look at what the block.c rea
On 02.07.2014 at 11:13, Paolo Bonzini wrote:
> On 02/07/2014 10:54, Stefan Hajnoczi wrote:
> >>Both can be eliminated by introducing a fast path in bdrv_aio_{read,write}v,
> >>that bypasses coroutines in the common case of no I/O throttling, no
> >>copy-on-write, etc.
> >
> >I tried that in 2012 and couldn't measure an improvement above the noise threshold
On 02/07/2014 10:54, Stefan Hajnoczi wrote:
Both can be eliminated by introducing a fast path in bdrv_aio_{read,write}v,
that bypasses coroutines in the common case of no I/O throttling, no
copy-on-write, etc.
I tried that in 2012 and couldn't measure an improvement above the noise
threshold
On Tue, Jul 01, 2014 at 06:49:30PM +0200, Paolo Bonzini wrote:
> On 01/07/2014 16:49, Ming Lei wrote:
> >Let me provide some data when running randread (bs=4k, libaio)
> >from the VM for 10 sec:
> >
> >1), qemu.git/master
> >- write(): 731K
> >- rt_sigprocmask(): 417K
> >- read(): 21K
> >- ppoll(): 10K
On Wed, Jul 2, 2014 at 12:49 AM, Paolo Bonzini wrote:
> On 01/07/2014 16:49, Ming Lei wrote:
>
>> Let me provide some data when running randread (bs=4k, libaio)
>> from the VM for 10 sec:
>>
>> 1), qemu.git/master
>> - write(): 731K
>> - rt_sigprocmask(): 417K
>> - read(): 21K
>> - ppoll(): 10K
>>
On 01/07/2014 16:49, Ming Lei wrote:
Let me provide some data when running randread (bs=4k, libaio)
from the VM for 10 sec:
1), qemu.git/master
- write(): 731K
- rt_sigprocmask(): 417K
- read(): 21K
- ppoll(): 10K
- io_submit(): 5K
- io_getevents(): 4K
2), qemu 2.0
- write(): 9K
- read(): 28K
-
On Tue, Jul 1, 2014 at 10:31 PM, Stefan Hajnoczi wrote:
> On Tue, Jul 1, 2014 at 3:53 PM, Ming Lei wrote:
>> On Mon, Jun 30, 2014 at 4:08 PM, Stefan Hajnoczi wrote:
>>>
>>> Try:
>>> $ perf record -e syscalls:* --tid
>>> ^C
>>> $ perf script # shows the trace log
>>>
>>> The difference between syscalls in QEMU 2.0 and qemu.git/master could reveal the problem.
On Tue, Jul 1, 2014 at 3:53 PM, Ming Lei wrote:
> On Mon, Jun 30, 2014 at 4:08 PM, Stefan Hajnoczi wrote:
>>
>> Try:
>> $ perf record -e syscalls:* --tid
>> ^C
>> $ perf script # shows the trace log
>>
>> The difference between syscalls in QEMU 2.0 and qemu.git/master could
>> reveal the problem
On Mon, Jun 30, 2014 at 4:08 PM, Stefan Hajnoczi wrote:
>
> Try:
> $ perf record -e syscalls:* --tid
> ^C
> $ perf script # shows the trace log
>
> The difference between syscalls in QEMU 2.0 and qemu.git/master could
> reveal the problem.
The difference is that there are tons of write() and rt_sigprocmask()
On Mon, Jun 30, 2014 at 4:08 PM, Stefan Hajnoczi wrote:
> On Sat, Jun 28, 2014 at 05:58:58PM +0800, Ming Lei wrote:
>> On Sat, Jun 28, 2014 at 5:51 AM, Paolo Bonzini wrote:
>> > On 27/06/2014 20:01, Ming Lei wrote:
>> >
>> >> I just implemented plug&unplug based batching, and it is working now.
On Sat, Jun 28, 2014 at 05:58:58PM +0800, Ming Lei wrote:
> On Sat, Jun 28, 2014 at 5:51 AM, Paolo Bonzini wrote:
> > On 27/06/2014 20:01, Ming Lei wrote:
> >
> >> I just implemented plug&unplug based batching, and it is working now.
> >> But throughput still has no obvious improvement.
> >>
On Sat, Jun 28, 2014 at 5:51 AM, Paolo Bonzini wrote:
> On 27/06/2014 20:01, Ming Lei wrote:
>
>> I just implemented plug&unplug based batching, and it is working now.
>> But throughput still has no obvious improvement.
>>
>> The load in the IOthread looks a bit low, so I am wondering if there
On 27/06/2014 20:01, Ming Lei wrote:
I just implemented plug&unplug based batching, and it is working now.
But throughput still has no obvious improvement.
The load in the IOthread looks a bit low, so I am wondering if there is
a bottleneck caused by the QEMU block layer.
What does perf say
On Fri, Jun 27, 2014 at 8:01 PM, Stefan Hajnoczi wrote:
> On Thu, Jun 26, 2014 at 11:14:16PM +0800, Ming Lei wrote:
>> Hi Stefan,
>>
>> I found VM block I/O throughput is decreased by more than 40%
>> on my laptop, and it looks much worse in my server environment;
>> it is caused by your commit 580b6b2aa2:
On Fri, Jun 27, 2014 at 02:21:06PM +0200, Kevin Wolf wrote:
> On 27.06.2014 at 14:01, Stefan Hajnoczi wrote:
> > On Thu, Jun 26, 2014 at 11:14:16PM +0800, Ming Lei wrote:
> > > Hi Stefan,
> > >
> > > I found VM block I/O throughput is decreased by more than 40%
> > > on my laptop, and it looks much worse in my server environment,
On Fri, Jun 27, 2014 at 2:23 PM, Kevin Wolf wrote:
> On 27.06.2014 at 06:59, Paolo Bonzini wrote:
>> On 27/06/2014 03:15, Ming Lei wrote:
>> >On Thu, Jun 26, 2014 at 11:57 PM, Paolo Bonzini wrote:
>> >>We can implement (advisory) calls like bdrv_plug/bdrv_unplug in order to
>> >>restore the previous levels of performance.
On 27.06.2014 at 14:01, Stefan Hajnoczi wrote:
> On Thu, Jun 26, 2014 at 11:14:16PM +0800, Ming Lei wrote:
> > Hi Stefan,
> >
> > I found VM block I/O throughput is decreased by more than 40%
> > on my laptop, and it looks much worse in my server environment;
> > it is caused by your commit 580b6b2aa2:
On Thu, Jun 26, 2014 at 11:14:16PM +0800, Ming Lei wrote:
> Hi Stefan,
>
> I found VM block I/O throughput is decreased by more than 40%
> on my laptop, and it looks much worse in my server environment;
> it is caused by your commit 580b6b2aa2:
>
> dataplane: use the QEMU block layer for I/O
On Fri, Jun 27, 2014 at 12:59 PM, Paolo Bonzini wrote:
> On 27/06/2014 03:15, Ming Lei wrote:
>>
>> On Thu, Jun 26, 2014 at 11:57 PM, Paolo Bonzini
>> wrote:
>>>
>>> We can implement (advisory) calls like bdrv_plug/bdrv_unplug in order to
>>> restore the previous levels of performance.
>>
>>
On 27/06/2014 08:23, Kevin Wolf wrote:
Note that there is already an interface in block.c that takes multiple
requests at once, bdrv_aio_multiwrite(). It is currently used by
virtio-blk, even though not in dataplane mode. It also submits
individual requests to the block drivers currently, so
On 27.06.2014 at 06:59, Paolo Bonzini wrote:
> On 27/06/2014 03:15, Ming Lei wrote:
> >On Thu, Jun 26, 2014 at 11:57 PM, Paolo Bonzini wrote:
> >>We can implement (advisory) calls like bdrv_plug/bdrv_unplug in order to
> >>restore the previous levels of performance.
> >
> >Yes, that is also what I am thinking, or interfaces like bdrv_queue_io()
On 27/06/2014 03:15, Ming Lei wrote:
On Thu, Jun 26, 2014 at 11:57 PM, Paolo Bonzini wrote:
We can implement (advisory) calls like bdrv_plug/bdrv_unplug in order to
restore the previous levels of performance.
Yes, that is also what I am thinking, or interfaces like bdrv_queue_io()
and bd
On Thu, Jun 26, 2014 at 11:57 PM, Paolo Bonzini wrote:
> This is indeed a difference between the ioq-based and block-based backends.
> ioq could submit more than one request with the same io_submit system call.
>
Yes, I have been thinking that is an advantage of the QEMU virtio dataplane, but
it isn't a
This is indeed a difference between the ioq-based and block-based
backends. ioq could submit more than one request with the same
io_submit system call.
We can implement (advisory) calls like bdrv_plug/bdrv_unplug in order to
restore the previous levels of performance.
Note that some fallout
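For reference, the batching that the ioq backend got from Linux AIO looks roughly like the standalone example below: requests are only prepared in user space ("plug"), and a single io_submit() then carries the whole array ("unplug"). This is an illustration, not QEMU code; the file name, batch size and block size are arbitrary, and it must be linked with -laio.

/* Illustration only: batch several reads into one io_submit() syscall. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

enum { BATCH = 8, BS = 4096 };

int main(void)
{
    io_context_t ctx = 0;
    struct iocb iocbs[BATCH], *list[BATCH];
    struct io_event events[BATCH];

    int fd = open("testfile", O_RDONLY | O_DIRECT);
    if (fd < 0 || io_setup(BATCH, &ctx) < 0) {
        perror("setup");
        return 1;
    }
    for (int i = 0; i < BATCH; i++) {
        void *buf = NULL;
        if (posix_memalign(&buf, BS, BS) != 0) {
            return 1;
        }
        /* "plug": queue the request in user space, no syscall yet */
        io_prep_pread(&iocbs[i], fd, buf, BS, (long long)i * BS);
        list[i] = &iocbs[i];
    }
    /* "unplug": one syscall submits all BATCH requests at once */
    int submitted = io_submit(ctx, BATCH, list);
    int done = io_getevents(ctx, submitted > 0 ? submitted : 0, BATCH,
                            events, NULL);
    printf("submitted %d, completed %d\n", submitted, done);
    io_destroy(ctx);
    close(fd);
    return 0;
}

The same pattern is what advisory bdrv_plug/bdrv_unplug calls would expose through the block layer: accumulate requests between the two calls, then hand them to the driver in one go instead of one io_submit() per request.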
On Thu, Jun 26, 2014 at 11:43 PM, Paolo Bonzini wrote:
> On 26/06/2014 17:37, Ming Lei wrote:
>
>> On Thu, Jun 26, 2014 at 11:29 PM, Paolo Bonzini
>> wrote:
>>>
>>> On 26/06/2014 17:14, Ming Lei wrote:
>>>
Hi Stefan,
I found VM block I/O throughput is decreased by more than 40%
On 26/06/2014 17:37, Ming Lei wrote:
On Thu, Jun 26, 2014 at 11:29 PM, Paolo Bonzini wrote:
On 26/06/2014 17:14, Ming Lei wrote:
Hi Stefan,
I found VM block I/O throughput is decreased by more than 40%
on my laptop, and it looks much worse in my server environment;
it is caused by your commit 580b6b2aa2:
On Thu, Jun 26, 2014 at 11:29 PM, Paolo Bonzini wrote:
> On 26/06/2014 17:14, Ming Lei wrote:
>
>> Hi Stefan,
>>
>> I found VM block I/O throughput is decreased by more than 40%
>> on my laptop, and it looks much worse in my server environment;
>> it is caused by your commit 580b6b2aa2:
>>
>
On 26/06/2014 17:14, Ming Lei wrote:
Hi Stefan,
I found VM block I/O throughput is decreased by more than 40%
on my laptop, and it looks much worse in my server environment;
it is caused by your commit 580b6b2aa2:
dataplane: use the QEMU block layer for I/O
I run fio with the config below to test random read:
Hi Stefan,
I found VM block I/O throughput is decreased by more than 40%
on my laptop, and it looks much worse in my server environment;
it is caused by your commit 580b6b2aa2:
dataplane: use the QEMU block layer for I/O
I run fio with the config below to test random read:
[global]
direc
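The job file itself is cut off above. A job along the following lines would match the parameters mentioned elsewhere in the thread (random read, bs=4k, libaio, 10 seconds); treat it as a guess at the shape of the config rather than the actual file, and note that the iodepth value and the target filename are placeholders.

[global]
ioengine=libaio
direct=1
bs=4k
rw=randread
runtime=10
time_based=1
iodepth=64          ; guess, not stated in the thread

[job1]
filename=/dev/vdb   ; placeholder for the test disk inside the guest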