Ccing live migration developers who should be interested in this work,
On Mon, 12 Nov 2012 21:10:32 -0200
Marcelo Tosatti wrote:
> On Mon, Nov 05, 2012 at 05:59:26PM +0800, Xiao Guangrong wrote:
> > Do not drop a large spte until it can be replaced by small pages, so that
> > the guest can happily
On Tue, 19 Jun 2012 09:01:36 -0500
Anthony Liguori wrote:
> I'm not at all convinced that postcopy is a good idea. There needs to be a clear
> expression of what the value proposition is, backed by benchmarks. Those
> benchmarks need to include latency measurements of downtime, which so far
On Wed, 02 May 2012 14:33:55 +0300
Avi Kivity wrote:
> > =
> > perf top -t ${QEMU_TID}
> > =
> > 51.52% qemu-system-x86_64 [.] memory_region_get_dirty
> > 16.73% qemu-system-x86_64 [.] ram_save_remaining
> >
>
> memory_regio
On Sat, 28 Apr 2012 19:05:44 +0900
Takuya Yoshikawa wrote:
> 1. Problem
> During live migration, if the guest tries to take mmu_lock at the same
> time as GET_DIRTY_LOG, which is called periodically by QEMU, it may be
forced to wait a long time; this is not restrict
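For reference, a minimal sketch of the KVM_GET_DIRTY_LOG call that QEMU issues
periodically for each slot (assuming a valid VM fd and a caller-allocated bitmap
with one bit per page in the slot; error handling omitted):

    #include <linux/kvm.h>
    #include <string.h>
    #include <sys/ioctl.h>

    /* Fetch-and-clear the dirty bitmap of one memory slot.  On the kernel
     * side this write-protects the dirtied pages under mmu_lock, which is
     * where the contention with guest page faults described above comes in. */
    static int get_dirty_log(int vm_fd, unsigned int slot, void *bitmap)
    {
        struct kvm_dirty_log log;

        memset(&log, 0, sizeof(log));
        log.slot = slot;
        log.dirty_bitmap = bitmap;

        return ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log);
    }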
Avi Kivity wrote:
> > > Slot searching is quite fast since there's a small number of slots, and
> > > we sort the larger ones to be in the front, so positive lookups are fast.
> > > We cache negative lookups in the shadow page tables (an spte can be
> > > either "not mapped", "mapped to RAM"
Hope to get comments from live migration developers,
Anthony Liguori wrote:
> > Guest memory management
> > ---
> > Instead of managing each memory slot individually, a single API will be
> > provided that replaces the entire guest physical memory map atomically.
> > This mat
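A hypothetical illustration of that interface, with invented names (this is not
the actual QEMU/KVM API, just the shape of an "atomic replace" call): the caller
builds a complete new guest physical map and commits it in one operation, and the
implementation is free to diff it against the current map internally.

    #include <pthread.h>
    #include <stdint.h>
    #include <stdlib.h>

    struct gpa_range {
        uint64_t gpa;       /* guest physical start */
        uint64_t size;
        void    *hva;       /* host virtual backing */
        uint32_t flags;     /* e.g. read-only, dirty-log */
    };

    struct gpa_map {
        struct gpa_range *ranges;
        size_t            nranges;
    };

    static pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct gpa_map *current_map;

    /* Commit a complete new map in one call; a real implementation would
     * diff new_map against current_map here and update only the slots
     * that changed, instead of having callers touch them one by one. */
    static void guest_memory_map_commit(struct gpa_map *new_map)
    {
        struct gpa_map *old;

        pthread_mutex_lock(&map_lock);
        old = current_map;
        current_map = new_map;
        pthread_mutex_unlock(&map_lock);

        free(old);   /* assumes the old map is no longer referenced */
    }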
(2012/01/13 10:09), Benoit Hudzia wrote:
Hi,
Sorry to hijack the thread like that; however, I would like
to inform you that we recently achieved a milestone in the
research project I'm leading. We enhanced KVM in order to deliver
post-copy live migration using RDMA at kernel
(2012/01/01 18:52), Dor Laor wrote:
But we really need to think hard about whether this is the right thing
to take into the tree. I worry a lot about the fact that we don't test
pre-copy migration nearly enough and adding a second form just
introduces more things to test.
It is an issue but it
Avi Kivity wrote:
> That's true. But some applications do require low latency, and the
> current code can impose a lot of time with the mmu spinlock held.
>
> The total amount of work actually increases slightly, from O(N) to O(N
> log N), but since the tree is so wide, the overhead is small.
>
(2011/11/30 14:02), Takuya Yoshikawa wrote:
IIUC, even though the O(1) method is O(1) at GET_DIRTY_LOG time, it still needs
O(N) write protections with respect to the total number of dirty pages: the work
is distributed, but each page fault that should be logged does some write
protection, right?
Sorry
CCing qemu devel, Juan,
(2011/11/29 23:03), Avi Kivity wrote:
On 11/29/2011 02:01 PM, Avi Kivity wrote:
On 11/29/2011 01:56 PM, Xiao Guangrong wrote:
On 11/29/2011 07:20 PM, Avi Kivity wrote:
We used to have a bitmap in a shadow page with a bit set for every slot
pointed to by the page. If
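A toy version of that bitmap idea (illustrative names, assuming at most 64 slots):

    #include <stdbool.h>
    #include <stdint.h>

    /* Each shadow page remembers which memory slots it maps, so that
     * "does this page touch slot X?" is a single bit test instead of a
     * walk over its sptes. */
    struct shadow_page {
        uint64_t slot_bitmap;            /* bit i set => maps slot i */
    };

    static void sp_note_slot(struct shadow_page *sp, unsigned int slot)
    {
        sp->slot_bitmap |= 1ULL << slot;
    }

    static bool sp_maps_slot(const struct shadow_page *sp, unsigned int slot)
    {
        return sp->slot_bitmap & (1ULL << slot);
    }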
Adding qemu-devel to Cc.
(2011/11/14 21:39), Avi Kivity wrote:
On 11/14/2011 12:56 PM, Takuya Yoshikawa wrote:
(2011/11/14 19:25), Avi Kivity wrote:
On 11/14/2011 11:20 AM, Takuya Yoshikawa wrote:
This is a revised version of my previous work. I hope that
the patches are more self
Adding qemu-devel ML to CC.
Your question should have been sent to qemu-devel ML because the logic
is implemented in QEMU, not KVM.
(2011/11/11 1:35), Oliver Hookins wrote:
Hi,
I am performing some benchmarks on KVM migration on two different types of VM.
One has 4GB RAM and the other 32GB. Mo
Vivek Goyal wrote:
> So you are using RHEL 6.0 for both the host and guest kernel? Can you
> reproduce the same issue with upstream kernels? How easily/frequently
> can you reproduce this with a RHEL6.0 host?
Guests were CentOS6.0.
I have only RHEL6.0 and RHEL6.1 test results now.
I want to try s
--
Takuya Yoshikawa
> >> What kind of mmio should be traced here, device or CPU originated? Or both?
> >>
> >> Jan
> >>
> >>
> >
> > To let Kemari replay outputs upon failover, tracing CPU originated
> > mmio (specifically write requests) should be enough.
> > IIUC, we can reproduce device originated mmio as a resul
Thanks for the answers Avi, Juan,
Some FYI, (not about the bottleneck)
On Wed, 01 Dec 2010 14:35:57 +0200
Avi Kivity wrote:
> > > - how many dirty pages do we have to care?
> >
> > With the default values, and assuming 1 Gigabit Ethernet for ourselves, that is
> > ~9.5MB of dirty pages to have only 30ms of downt
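The back-of-the-envelope constraint behind numbers like that is simply that
whatever is still dirty at the final stop must fit through the link within the
allowed downtime (how the quoted figure falls out of QEMU's defaults also depends
on the configured migration speed and per-page overhead):

    \text{dirty bytes at the final stop} \;\lesssim\; \text{bandwidth} \times t_{\text{max downtime}}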
(2010/11/30 1:41), Dor Laor wrote:
Is this a fair summary: any device that supports live migration works
under Kemari?
It might be a fair summary, but practically we barely have live migration working
w/o Kemari. In addition, last I checked, Kemari needs additional hooks and it
will be too hard
On Wed, 01 Dec 2010 02:52:08 +0100
Juan Quintela wrote:
> > Since we are planning to do some profiling for these, taking into account
> > Kemari, can you please share this information?
>
> If you see the 0/10 email with this setup, you can see how much time we
> are spending on stuff. Just now
> 's what the patch set I was alluding to did. Or maybe I imagined
> the whole thing.
>
> >>> We also need to implement live migration in a separate thread that
> >>> doesn't carry qemu_mutex while it runs.
> >>
> >> IMO that's the biggest hit currently.
> >
> > Yup. That's the Correct solution to the problem.
>
> Then let's just Do it.
>
> --
> error compiling committee.c: too many arguments to function
>
--
Takuya Yoshikawa
(2010/04/22 19:35), Yoshiaki Tamura wrote:
A trivial one would be to:
- do X online snapshots/sec
I don't have good numbers that I can share right now.
Snapshots/sec depends on what kind of workload is running; if the
guest was almost idle, there will be no snapshots in 5 sec.