On 04/21/2016 05:54 PM, Lutz Vieweg wrote:
And indeed, the errors occurred exactly at the time a backup procedure
was preparing a read-only snapshot with "btrfs subvolume snapshot -r" -
so until I can upgrade to a mainline kernel including the fix, I'll
pause the qemu process while the "btrfs subv
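A minimal sketch of that workaround (the guest name, subvolume paths and monitor socket below are assumptions, not taken from the original mail):
  virsh suspend vm1          # or: echo stop | socat - UNIX-CONNECT:/run/qemu-hmp.sock
  btrfs subvolume snapshot -r /srv/vm /srv/backup/vm-$(date +%F)
  virsh resume vm1           # or: echo cont | socat - UNIX-CONNECT:/run/qemu-hmp.sock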
On Thu, 04/21 17:54, Lutz Vieweg wrote:
> Nevertheless, I think qemu could be somewhat more verbose, reporting
> when and why it stops emulation. Something like a message to the monitor
> or to standard out would be helpful to start with...
QEMU does report an error message to the connected monitor if
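A sketch of checking that state from outside (the monitor socket path is an assumption and requires qemu to have been started with -monitor unix:<path>,server,nowait):
  echo "info status" | socat - UNIX-CONNECT:/run/qemu-hmp.sock
  # with the default stop-on-error policy this reports something like "VM status: paused (io-error)"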
On 04/20/2016 04:38 PM, Lutz Vieweg wrote:
I've now a
strace -f -p 10727 -e trace=pwrite,pwritev,fdatasync,file -t 2>&1 | gzip -1 -c > trace.gz
attached to the qemu-process.
If the incident rate stays the same, by tomorrow I should be able
to correlate newly emitted I/O-errors in the guest wit
On 04/20/2016 01:50 PM, Kevin Wolf wrote:
To catch all possible write failures, I think pwrite, pwritev and
possibly fdatasync need to be considered.
I've now a
strace -f -p 10727 -e trace=pwrite,pwritev,fdatasync,file -t 2>&1 | gzip -1 -c > trace.gz
attached to the qemu-process.
If the inci
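Once the trace has accumulated, a sketch of pulling the failed host writes out of it (the errno names are assumptions about what strace prints after "= -1"):
  zgrep -E 'pwritev?(64)?\(.*= -1 E' trace.gz    # failed pwrite/pwritev calls
  zgrep -E 'fdatasync\(.*= -1 E' trace.gz        # failed flushes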
On 20.04.2016 at 04:11, Fam Zheng wrote:
> On Tue, 04/19 19:47, Lutz Vieweg wrote:
> > The guest drive parameters are:
> > > -drive
> > > "file=image.raw,if=virtio,format=raw,media=disk,cache=unsafe,werror=report,rerror=report"
>
> Given this implies aio=threads...
>
> > Can you provide
On Tue, 04/19 19:47, Lutz Vieweg wrote:
> The guest drive parameters are:
> > -drive
> > "file=image.raw,if=virtio,format=raw,media=disk,cache=unsafe,werror=report,rerror=report"
Given this implies aio=threads...
> Can you provide any hint on how to pursue the cause of these errors?
> (I thought
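For reference, a sketch of the same -drive line with the implied default written out: aio=native would require cache.direct=on, which cache=unsafe does not set, so the thread-pool backend is what is in use here.
  -drive file=image.raw,if=virtio,format=raw,media=disk,cache=unsafe,aio=threads,werror=report,rerror=report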
Hi,
I have been investigating strange stalls of virtual machines,
and realized that the VMs were (silently) paused because qemu
thinks there were I/O errors when writing to the host.
After using "werror=report,rerror=report" with "-drive" we now see
actual reporting of I/O errors to the guest, w
On Mon 08 Jun 2015 03:21:19 PM CEST, Stefan Hajnoczi wrote:
> Please structure the patches so that each statistic or group of
> statistics has its own patch.
Yes, that's the plan.
>> uint64_t queue_depth[BLOCK_MAX_IOTYPE];
>>
>>Average number of requests. Similar to the previous one. It w
On Wed, Jun 03, 2015 at 03:40:42PM +0200, Alberto Garcia wrote:
Please structure the patches so that each statistic or group of
statistics has its own patch. That will make it easy to review and
possibly merge a subset if some of the statistics prove to be
controversial.
> uint64_t queue_depth[B
On Wed 03 Jun 2015 04:18:45 PM CEST, Eric Blake wrote:
>> The accounting stats are stored in the BlockDriverState, but they're
>> actually from the device backed by the BDS, so they could probably be
>> moved there. For the interface we could extend BlockDeviceStats and
>> add the new fields, but
On 06/03/2015 07:40 AM, Alberto Garcia wrote:
> Hello,
>
> I would like to pick up the work that Benoît was about to start last
> year and extend the I/O accounting in QEMU. I have been reading the past
> discussions and will try to summarize all the ideas.
>
> The current accounting code collects the
Hello,
I would like to pick up the work that Benoît was about to start last
year and extend the I/O accounting in QEMU. I have been reading the past
discussions and will try to summarize all the ideas.
The current accounting code collects the following information:
typedef struct BlockAcctStats {
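The preview is cut off here; for context, a sketch of reading the existing counters back at runtime through QMP (the socket path is an assumption), which is the interface the proposed new fields would extend:
  socat - UNIX-CONNECT:/run/qemu-qmp.sock <<'EOF'
  {"execute": "qmp_capabilities"}
  {"execute": "query-blockstats"}
  EOF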
On Fri, Sep 05, 2014 at 12:45:27PM -0400, Xingbo Wu wrote:
> On Fri, Sep 5, 2014 at 6:02 AM, Stefan Hajnoczi wrote:
>
> > On Thu, Sep 04, 2014 at 12:32:12PM -0400, Xingbo Wu wrote:
> > > After running a 16-thread sync-random-write test against qcow2, it is
> > > observed that QCOW2 seems to be
On Fri, Sep 5, 2014 at 6:02 AM, Stefan Hajnoczi wrote:
> On Thu, Sep 04, 2014 at 12:32:12PM -0400, Xingbo Wu wrote:
> > After running a 16-thread sync-random-write test against qcow2, it is
> > observed that QCOW2 seems to be serializing all its metadata-related
> > writes.
> > If qcow2 is desig
On Thu, Sep 04, 2014 at 12:32:12PM -0400, Xingbo Wu wrote:
> After running a 16-thread sync-random-write test against qcow2, it is
> observed that QCOW2 seems to be serializing all its metadata-related writes.
> If qcow2 is designed to do this, *then what is the concern?* What would go
> wrong i
Hello guys,
After running a 16-thread sync-random-write test against qcow2, it is
observed that QCOW2 seems to be serializing all its metadata-related writes.
If qcow2 is designed to do this, *then what is the concern?* What would go
wrong if this ordering is relaxed?
By providing fewer features,
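A sketch of a comparable workload (parameters and target file are assumptions, not the reporter's exact job): 16 threads issuing synchronous random writes against a file on the qcow2-backed disk.
  fio --name=syncwrite --filename=/mnt/test/fio.dat --size=1G --rw=randwrite \
      --bs=4k --ioengine=psync --fsync=1 --numjobs=16 --group_reporting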
On Thu, Sep 05, 2013 at 10:18:28AM +0900, Jonghwan Choi wrote:
Thanks for posting these details.
Have you tried running x-data-plane=off with vcpu = 8 and how does the
performance compare to x-data-plane=off with vcpu = 1?
> > 1. The fio results so it's clear which cases performed worse and by h
specs including RAM and number of logical CPUs.
-> Host: 256 GB RAM, 31 logical CPUs; Guest: 48 GB RAM, 8 vCPUs
Thanks.
Best Regards.
> -Original Message-
> From: Stefan Hajnoczi [mailto:stefa...@gmail.com]
> Sent: Wednesday, September 04, 2013 5:59 PM
> To: Jonghwan Choi
> Cc: qemu-de
On Mon, Sep 02, 2013 at 05:24:09PM +0900, Jonghwan Choi wrote:
> Recently I measured I/O performance with Virtio-Blk-Data-Plane.
> There was something strange in the test:
> when the vcpu count is 1, I/O performance increases,
> but when the vcpu count is 2 or more, I/O performance decreases.
>
> I u
Hello All.
Recently I measured I/O performance with Virtio-Blk-Data-Plane.
There was something strange in the test:
when the vcpu count is 1, I/O performance increases,
but when the vcpu count is 2 or more, I/O performance decreases.
I used the 3.10.9 stable kernel, qemu 1.4.2, and fio 2.1.
What should I
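For reference, a sketch of how dataplane is enabled with qemu-1.4-era syntax (image path, memory size and the rest of the command line are illustrative):
  qemu-system-x86_64 -enable-kvm -smp 8 -m 49152 \
    -drive if=none,id=drive0,file=/path/to/test.img,format=raw,cache=none,aio=native \
    -device virtio-blk-pci,drive=drive0,scsi=off,x-data-plane=on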
On Fri, Mar 30, 2012 at 09:47:52AM +0530, PANKAJ RAWAT wrote:
> I am currently using a backing file. My question concerns the I/O operation.
> When we create an external snapshot in qcow2, a new file is created,
> leaving the original file as the backing file.
> Can anyone tell,
Hi all,
I am currently using a backing file. My question concerns the I/O operation.
When we create an external snapshot in qcow2, a new file is created,
leaving the original file as the backing file.
Can anyone tell me in detail how the I/O is performed?
I mean, when the new snap
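A sketch of the mechanism with illustrative file names: after the snapshot, all new writes go only to the overlay, while reads of clusters the overlay has not yet written fall through to the backing file.
  qemu-img create -f qcow2 -b base.img overlay.qcow2   # overlay records only new writes
  qemu-img info overlay.qcow2                          # shows the backing file relationship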
Hi,
As of commit 12d4536f7d911b6d87a766ad7300482ea663cea2, the I/O thread is
now enabled by default.
I think we've done about as much testing and preparation as we can. If
you run into any odd regressions, you should still be able to revert
this commit easily. We can hold off making signif
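A sketch of backing the change out locally if needed:
  git revert 12d4536f7d911b6d87a766ad7300482ea663cea2   # then rebuild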
Hi guys, which file in QEMU is related to I/O activity?
t; <[EMAIL PROTECTED]>
To:
Sent: Tuesday, September 05, 2006 9:18 PM
Subject: RE: [Qemu-devel] I/O port 0xc000
Is it for IDE port I/O access?
Is it for IDE port I/O access?
Thanks
Yunhong Jiang
>-Original Message-
>From: [EMAIL PROTECTED]
>[mailto:[EMAIL PROTECTED]
>On Behalf Of Siim Sober
>Sent: September 5, 2006 7:38
>To: qemu-devel@nongnu.org
>Subject: [Qemu-devel] I/O port 0xc000
>
>Hello all.
>
Hello all.
Can anyone tell me what is at port 0xc000 in qemu? I need to disable it.
Thanks in advance.
Siim Sober
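One way to narrow it down (a sketch; which device actually owns the range depends on the BIOS BAR assignment): the monitor's "info pci" output lists each device's I/O ranges, and on the default PIIX machine the range at 0xc000 is commonly the IDE controller's bus-master (BMDMA) BAR, which would match the "IDE port I/O access" guess in the reply.
  (qemu) info pci     # look for the device whose I/O BAR covers 0xc000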