> > >> >> > Right now, we don't have an interface to detect those cases and
> > >> >> > go back to the iterative stage.
> > >> >>
> > >> >> How about going back to the iterative stage when we detect that the
> > >> >> pending_size is larger than max_size, like this:
> > >> >>
> > >> >> +/* do flush here is aimed to shorten ...
> > > Then, how should we deal with this issue in 2.3 -- leave it as is, or
> > > make an incomplete fix like the one I posted above?
> >
> > I think it is better to leave it as is for 2.3. With a patch like this
> > one, we improve one load and get worse on a different load (it depends
> > a lot on the ratio of dirty ...
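For context, a minimal sketch (in C) of the control flow being proposed here, loosely modeled on the migration_thread() loop of that QEMU era. qemu_savevm_state_pending(), qemu_savevm_state_iterate() and bdrv_flush_all() are real QEMU 2.3 interfaces; the early flush, the re-check and the complete_migration() helper are illustration only, not merged code:

    /* Sketch, not merged code: re-check pending_size after an early flush
     * and fall back to the iterative stage if the guest dirtied too much. */
    uint64_t pending_size = qemu_savevm_state_pending(s->file, max_size);

    if (pending_size && pending_size >= max_size) {
        /* Too much outstanding data: stay in the iterative stage. */
        qemu_savevm_state_iterate(s->file);
    } else {
        /* Flush while the guest still runs, so the flush done inside the
         * downtime window is short. */
        bdrv_flush_all();
        pending_size = qemu_savevm_state_pending(s->file, max_size);
        if (pending_size && pending_size >= max_size) {
            qemu_savevm_state_iterate(s->file);  /* go back, don't complete */
        } else {
            complete_migration(s);               /* hypothetical final stage */
        }
    }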
On 18.03.2015 at 13:36, Juan Quintela wrote:
> Kevin Wolf wrote:
> > The problem is that the block layer really doesn't have an option to
> > control what is getting synced once the data is cached outside of qemu.
> > Essentially we can do an fdatasync() or we can leave it; that's the only ...
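To make the all-or-nothing point concrete, here is a rough sketch of the shape of bdrv_flush_all() in that era (not the verbatim source; the real function also handles AioContext locking): it walks every block device and issues a full flush, which typically reaches the host as an fdatasync(). There is no interface to sync only a subset of the cached data.

    /* Rough sketch of bdrv_flush_all(): each device gets a full flush
     * or none at all; partial syncing is not an option. */
    int bdrv_flush_all(void)
    {
        BlockDriverState *bs;
        int result = 0;

        QTAILQ_FOREACH(bs, &bdrv_states, device_list) {
            int ret = bdrv_flush(bs);  /* usually an fdatasync() on the host */
            if (ret < 0 && !result) {
                result = ret;
            }
        }
        return result;
    }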
> > > This needs further review/changes on the block layer.
> > >
> > > First, an explanation of why I think this doesn't fix the full problem.
> > > With this patch, we fix the problem where we have a dirty block
> > > layer but basically nothing dirtying the memory on the guest (we are
> > > moving the 20 seconds from max_downtime for the block-layer flush) to ...
On 18/03/2015 13:36, Juan Quintela wrote:
> I know that the code has changed a lot in that area; the select() doesn't
> exist anymore.

It is still there in aio_poll():

    ret = qemu_poll_ns((GPollFD *)ctx->pollfds->data,
                       ctx->pollfds->len,
                       blocking ? aio_compute_timeout(ctx) : 0);
If there are file write operations in the guest when doing live
migration, the VM downtime will be much longer than max_downtime.
This is caused by bdrv_flush_all(), which is a time-consuming
operation if a lot of data has to be flushed to disk.

By adding bdrv_flush_all() before ...
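The preview is cut off here, but the approach the thread discusses is to issue an extra bdrv_flush_all() while the guest is still running, so that the flush happening inside the downtime window has little left to write. A hedged sketch of that placement (vm_stop_force_state() and qemu_savevm_state_complete() are QEMU 2.3 functions; the helper name and the exact hook point are assumptions, not the actual patch):

    /* Sketch of the proposed placement, not the actual patch. */
    static void migrate_finish(MigrationState *s)   /* hypothetical helper */
    {
        bdrv_flush_all();             /* early flush, guest still running */

        qemu_mutex_lock_iothread();
        vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);  /* downtime begins */
        qemu_savevm_state_complete(s->file);  /* final, now-short flush */
        qemu_mutex_unlock_iothread();
    }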