Re: [Qemu-devel] [PATCH] KVM: MMU: lazily drop large spte

2012-11-13 Thread Takuya Yoshikawa
CCing live migration developers who should be interested in this work, On Mon, 12 Nov 2012 21:10:32 -0200 Marcelo Tosatti wrote: > On Mon, Nov 05, 2012 at 05:59:26PM +0800, Xiao Guangrong wrote: > > Do not drop a large spte until it can be replaced by small pages so that > > the guest can happily
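The idea quoted above is to keep a large spte present but read-only when dirty logging starts, and to split it into small sptes only when the guest actually writes. A minimal user-space model of that lazy split, with invented bit names and helpers (this is not the kvm/mmu.c code):

/*
 * Model only: when dirty logging starts, a large (2MB) spte is merely made
 * read-only; it is split into small sptes lazily, on the first write fault.
 * The bit layout and helper names are invented for illustration.
 */
#include <stdint.h>
#include <stdio.h>

#define SPTE_PRESENT   (1ULL << 0)
#define SPTE_WRITABLE  (1ULL << 1)
#define SPTE_LARGE     (1ULL << 2)

/* Dirty-logging path: keep the large mapping for reads, drop only write. */
static void write_protect_spte(uint64_t *sptep)
{
    *sptep &= ~SPTE_WRITABLE;           /* large or small, just clear W */
}

/* Write-fault path: only now is the large spte replaced by small ones. */
static void handle_write_fault(uint64_t *sptep)
{
    if (*sptep & SPTE_LARGE) {
        *sptep &= ~SPTE_LARGE;          /* stand-in for installing 4K sptes */
        printf("large spte split on first write\n");
    }
    *sptep |= SPTE_WRITABLE;            /* the faulting page is then logged as dirty */
}

int main(void)
{
    uint64_t spte = SPTE_PRESENT | SPTE_WRITABLE | SPTE_LARGE;

    write_protect_spte(&spte);          /* guest reads still use the 2MB mapping */
    handle_write_fault(&spte);          /* first write triggers the split */
    return 0;
}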

Re: [Qemu-devel] KVM call agenda for Tuesday, June 19th

2012-06-19 Thread Takuya Yoshikawa
On Tue, 19 Jun 2012 09:01:36 -0500 Anthony Liguori wrote: > I'm not at all convinced that postcopy is a good idea. There needs to be a clear > expression of what the value proposition is that's backed by benchmarks. > Those > benchmarks need to include latency measurements of downtime which so far

Re: [Qemu-devel] Heavy memory_region_get_dirty() -- Re: [PATCH 0/1 v2] KVM: Alleviate mmu_lock contention during dirty logging

2012-05-02 Thread Takuya Yoshikawa
On Wed, 02 May 2012 14:33:55 +0300 Avi Kivity wrote: > > = > > perf top -t ${QEMU_TID} > > = > > 51.52% qemu-system-x86_64 [.] memory_region_get_dirty > > 16.73% qemu-system-x86_64 [.] ram_save_remaining > > > > memory_regio
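The perf output above shows memory_region_get_dirty() dominating because the RAM-save path queries the dirty bitmap one page at a time. A standalone toy comparison of a per-page query against a word-at-a-time count (this is a model, not QEMU code; all names are invented):

/*
 * Toy model, not QEMU code: PAGES and the helper names are invented.
 */
#include <stdint.h>
#include <stddef.h>

#define PAGES (1UL << 20)               /* model a 4GB guest with 4K pages */
#define WORDS (PAGES / 64)

static uint64_t dirty_bitmap[WORDS];

/* Per-page query: comparable in cost to one call per page in the save loop. */
static int page_is_dirty(size_t page)
{
    return (dirty_bitmap[page / 64] >> (page % 64)) & 1;
}

static size_t count_dirty_per_page(void)
{
    size_t n = 0;
    for (size_t p = 0; p < PAGES; p++)
        n += page_is_dirty(p);          /* one function call per page */
    return n;
}

static size_t count_dirty_per_word(void)
{
    size_t n = 0;
    for (size_t w = 0; w < WORDS; w++)
        n += __builtin_popcountll(dirty_bitmap[w]);  /* 64 pages per step */
    return n;
}

int main(void)
{
    dirty_bitmap[0] = 0xff;             /* pretend the first 8 pages are dirty */
    return count_dirty_per_page() == count_dirty_per_word() ? 0 : 1;
}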

[Qemu-devel] Heavy memory_region_get_dirty() -- Re: [PATCH 0/1 v2] KVM: Alleviate mmu_lock contention during dirty logging

2012-05-02 Thread Takuya Yoshikawa
On Sat, 28 Apr 2012 19:05:44 +0900 Takuya Yoshikawa wrote: > 1. Problem > During live migration, if the guest tries to take mmu_lock at the same > time as GET_DIRTY_LOG, which is called periodically by QEMU, it may be > forced to wait a long time; this is not restrict
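The contention described above can be modeled with an ordinary mutex: the dirty-log pass holds one big lock while it write-protects every page, so a guest fault that needs the same lock may wait for the whole pass. The sketch below also shows one generic mitigation, dropping and retaking the lock periodically; this is only an illustration, not necessarily what the posted patch does:

/*
 * Plain pthread model of the mmu_lock contention pattern; not kernel code.
 */
#include <pthread.h>
#include <stdint.h>

#define NPAGES        (1 << 18)
#define RELOCK_EVERY  2048

static pthread_mutex_t mmu_lock = PTHREAD_MUTEX_INITIALIZER;
static uint8_t write_protected[NPAGES];

static void get_dirty_log_pass(void)
{
    pthread_mutex_lock(&mmu_lock);
    for (int i = 0; i < NPAGES; i++) {
        write_protected[i] = 1;                 /* stand-in for spte work */
        if ((i % RELOCK_EVERY) == 0) {
            /* Let a waiting fault handler take the lock. */
            pthread_mutex_unlock(&mmu_lock);
            pthread_mutex_lock(&mmu_lock);
        }
    }
    pthread_mutex_unlock(&mmu_lock);
}

/* A guest write fault also needs mmu_lock before the page can be made writable. */
static void guest_write_fault(int page)
{
    pthread_mutex_lock(&mmu_lock);
    write_protected[page] = 0;
    pthread_mutex_unlock(&mmu_lock);
}

int main(void)
{
    get_dirty_log_pass();
    guest_write_fault(42);
    return 0;
}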

Re: [Qemu-devel] [RFC] Next gen kvm api

2012-02-11 Thread Takuya Yoshikawa
Avi Kivity wrote: > > > Slot searching is quite fast since there's a small number of slots, and > > > we sort the larger ones to be in the front, so positive lookups are fast. > > > We cache negative lookups in the shadow page tables (an spte can be > > > either "not mapped", "mapped to RAM"
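The lookup strategy described in the quote, slots scanned linearly with the largest first so the slot covering most of guest RAM is found almost immediately, can be modeled as follows (a standalone sketch with made-up slot values, not the kvm memslot code):

/*
 * Standalone model of "sort the larger slots to the front" lookups.
 */
#include <stddef.h>
#include <stdint.h>

typedef uint64_t gfn_t;

struct memslot {
    gfn_t  base_gfn;
    size_t npages;
};

/* Slots kept sorted by npages, descending, whenever the table is rebuilt. */
static struct memslot slots[] = {
    { .base_gfn = 0x100,   .npages = 0x80000 },  /* main RAM: hit most often */
    { .base_gfn = 0xfffc0, .npages = 0x40    },  /* BIOS */
    { .base_gfn = 0xc0,    .npages = 0x20    },  /* VGA */
};

static struct memslot *gfn_to_memslot(gfn_t gfn)
{
    for (size_t i = 0; i < sizeof(slots) / sizeof(slots[0]); i++) {
        struct memslot *s = &slots[i];
        if (gfn >= s->base_gfn && gfn < s->base_gfn + s->npages)
            return s;
    }
    return NULL;   /* negative lookup: mmio; the quote notes sptes cache this */
}

int main(void)
{
    return gfn_to_memslot(0x200) ? 0 : 1;
}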

Re: [Qemu-devel] [RFC] Next gen kvm api

2012-02-03 Thread Takuya Yoshikawa
Hope to get comments from live migration developers, Anthony Liguori wrote: > > Guest memory management > > --- > > Instead of managing each memory slot individually, a single API will be > > provided that replaces the entire guest physical memory map atomically. > > This mat
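The proposed API replaces the whole guest physical memory map in one call instead of updating slots one by one. A rough user-space model of that atomic publish, using a single pointer swap with invented names (the real proposal would also need an RCU-style grace period before freeing the old map):

/*
 * Invented model of an atomic whole-map replacement; not a real KVM API.
 */
#include <stdatomic.h>
#include <stdlib.h>
#include <stdint.h>

struct memslot { uint64_t base_gfn, npages, userspace_addr; };

struct memory_map {
    size_t         nslots;
    struct memslot slots[];
};

static _Atomic(struct memory_map *) current_map;

/* Install a fully built map in one shot; readers never see a half-updated set
 * of slots.  Freeing the old map safely (grace period) is omitted here. */
static struct memory_map *set_memory_map(struct memory_map *new_map)
{
    return atomic_exchange(&current_map, new_map);
}

int main(void)
{
    struct memory_map *m = malloc(sizeof(*m) + sizeof(struct memslot));
    m->nslots = 1;
    m->slots[0] = (struct memslot){ .base_gfn = 0, .npages = 1 << 20 };

    struct memory_map *old = set_memory_map(m);
    free(old);                       /* NULL on the first call */
    return 0;
}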

Re: [Qemu-devel] [PATCH 0/2][RFC] postcopy migration: Linux char device for postcopy

2012-01-12 Thread Takuya Yoshikawa
(2012/01/13 10:09), Benoit Hudzia wrote: Hi, sorry to hijack the thread like this, but I would like to inform you that we recently achieved a milestone in the research project I'm leading. We enhanced KVM in order to deliver post-copy live migration using RDMA at kernel

Re: [Qemu-devel] [PATCH 00/21][RFC] postcopy live migration

2012-01-03 Thread Takuya Yoshikawa
(2012/01/01 18:52), Dor Laor wrote: But we really need to think hard about whether this is the right thing to take into the tree. I worry a lot about the fact that we don't test pre-copy migration nearly enough and adding a second form just introduces more things to test. It is an issue but it

Re: [Qemu-devel] [PATCH 0/4] KVM: Dirty logging optimization using rmap

2011-12-02 Thread Takuya Yoshikawa
Avi Kivity wrote: > That's true. But some applications do require low latency, and the > current code can impose a lot of time with the mmu spinlock held. > > The total amount of work actually increases slightly, from O(N) to O(N > log N), but since the tree is so wide, the overhead is small. >
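The series in the subject write-protects pages found through the rmap of each dirty gfn, so the cost of GET_DIRTY_LOG scales with the number of dirty pages rather than with the slot size. A standalone model of that per-gfn approach, with invented names and structures (not the kernel code):

/*
 * Model of rmap-based write protection: walk only the gfns whose dirty bit
 * is set and write-protect the sptes reachable from each gfn's rmap list.
 */
#include <stdint.h>
#include <stddef.h>

struct rmap_entry {
    uint64_t          *sptep;
    struct rmap_entry *next;
};

#define SPTE_WRITABLE (1ULL << 1)

/* One rmap head per gfn in the slot; filled in by the fault path. */
static struct rmap_entry *rmap[1 << 16];
static uint64_t dirty_bitmap[(1 << 16) / 64];

static void write_protect_gfn(size_t gfn)
{
    for (struct rmap_entry *e = rmap[gfn]; e; e = e->next)
        *e->sptep &= ~SPTE_WRITABLE;
}

static void write_protect_dirty_gfns(void)
{
    for (size_t w = 0; w < sizeof(dirty_bitmap) / sizeof(dirty_bitmap[0]); w++) {
        uint64_t word = dirty_bitmap[w];
        while (word) {
            int bit = __builtin_ctzll(word);    /* next dirty gfn in this word */
            write_protect_gfn(w * 64 + bit);
            word &= word - 1;
        }
    }
}

int main(void)
{
    static uint64_t spte = SPTE_WRITABLE;
    static struct rmap_entry e = { .sptep = &spte };
    rmap[5] = &e;
    dirty_bitmap[0] = 1ULL << 5;
    write_protect_dirty_gfns();
    return (spte & SPTE_WRITABLE) ? 1 : 0;
}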

Re: [Qemu-devel] [PATCH 0/4] KVM: Dirty logging optimization using rmap

2011-11-29 Thread Takuya Yoshikawa
(2011/11/30 14:02), Takuya Yoshikawa wrote: IIUC, even though the O(1) approach is O(1) at the time of GET_DIRTY_LOG, it still needs O(N) write protections with respect to the total number of dirty pages: the work is distributed, but each page fault that must be logged does some write protection? Sorry

Re: [Qemu-devel] [PATCH 0/4] KVM: Dirty logging optimization using rmap

2011-11-29 Thread Takuya Yoshikawa
CCing qemu devel, Juan, (2011/11/29 23:03), Avi Kivity wrote: On 11/29/2011 02:01 PM, Avi Kivity wrote: On 11/29/2011 01:56 PM, Xiao Guangrong wrote: On 11/29/2011 07:20 PM, Avi Kivity wrote: We used to have a bitmap in a shadow page with a bit set for every slot pointed to by the page. If

Re: [Qemu-devel] [PATCH 0/4] KVM: Dirty logging optimization using rmap

2011-11-16 Thread Takuya Yoshikawa
Adding qemu-devel to Cc. (2011/11/14 21:39), Avi Kivity wrote: On 11/14/2011 12:56 PM, Takuya Yoshikawa wrote: (2011/11/14 19:25), Avi Kivity wrote: On 11/14/2011 11:20 AM, Takuya Yoshikawa wrote: This is a revised version of my previous work. I hope that the patches are more self

Re: [Qemu-devel] Memory sync algorithm during migration

2011-11-15 Thread Takuya Yoshikawa
Adding qemu-devel ML to CC. Your question should have been sent to qemu-devel ML because the logic is implemented in QEMU, not KVM. (2011/11/11 1:35), Oliver Hookins wrote: Hi, I am performing some benchmarks on KVM migration on two different types of VM. One has 4GB RAM and the other 32GB. Mo

Re: [Qemu-devel] CFQ I/O starvation problem triggered by RHEL6.0 KVM guests

2011-09-09 Thread Takuya Yoshikawa
Vivek Goyal wrote: > So you are using RHEL 6.0 in both the host and the guest kernel? Can you > reproduce the same issue with upstream kernels? How easily/frequently > can you reproduce this with a RHEL6.0 host? Guests were CentOS6.0. I have only RHEL6.0 and RHEL6.1 test results now. I want to try s

[Qemu-devel] CFQ I/O starvation problem triggered by RHEL6.0 KVM guests

2011-09-08 Thread Takuya Yoshikawa
(truncated CFQ trace excerpt; the relevant event is del_from_rr) -- Takuya Yoshikawa

Re: [Qemu-devel] [PATCH 12/18] Insert event_tap_mmio() to cpu_physical_memory_rw() in exec.c.

2011-04-27 Thread Takuya Yoshikawa
> >> What kind of mmio should be traced here, device or CPU originated? Or both? > >> > >> Jan > >> > >> > > > > To let Kemari replay outputs upon failover, tracing CPU originated > > mmio (specifically write requests) should be enough. > > IIUC, we can reproduce device originated mmio as a resul
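The shape of the hook being discussed, as a hedged sketch rather than the actual exec.c change: the CPU physical-memory write path notifies the event-tap layer so Kemari can replay CPU-originated MMIO writes on failover, while device-originated DMA is not tapped, on the reasoning in the quote that it can be reproduced by replaying the CPU I/O. Types and signatures below are simplified stand-ins:

/*
 * Simplified stand-in for the write path and the event_tap_mmio() hook.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t hwaddr;

static int event_tap_enabled = 1;

static void event_tap_mmio(hwaddr addr, const uint8_t *buf, int len)
{
    /* In Kemari this would queue the write so the secondary can replay it. */
    printf("tap: mmio write addr=0x%llx len=%d\n", (unsigned long long)addr, len);
    (void)buf;
}

static void cpu_physical_memory_write_model(hwaddr addr, const uint8_t *buf, int len)
{
    if (event_tap_enabled)
        event_tap_mmio(addr, buf, len);    /* tap CPU-originated writes only */

    /* ... perform the actual device/RAM write here ... */
}

int main(void)
{
    uint8_t val = 0x1;
    cpu_physical_memory_write_model(0xfebf0000, &val, sizeof(val));
    return 0;
}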

[Qemu-devel] Re: [PATCH 09/10] Exit loop if we have been there too long

2010-12-01 Thread Takuya Yoshikawa
Thanks for the answers Avi, Juan. Some FYI (not about the bottleneck): On Wed, 01 Dec 2010 14:35:57 +0200 Avi Kivity wrote: > > > - how many dirty pages do we have to care about? > > > > default values and assuming 1Gigabit ethernet for ourselves ~9.5MB of > > dirty pages to have only 30ms of downt
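The figure quoted above follows the usual budget formula: the dirty memory that may still be outstanding when the guest is stopped is roughly measured_bandwidth * max_downtime. The numbers below are placeholders; the ~9.5MB in the quote also depends on QEMU's defaults and on the bandwidth value QEMU actually measures, which need not equal line rate:

/*
 * Back-of-the-envelope dirty-page budget; the inputs are assumptions.
 */
#include <stdio.h>

int main(void)
{
    double bandwidth_bytes_per_s = 1e9 / 8;   /* assume a 1 Gbit/s link        */
    double max_downtime_s        = 0.030;     /* assume a 30 ms downtime target */

    double budget = bandwidth_bytes_per_s * max_downtime_s;
    printf("dirty-page budget ~= %.1f MB\n", budget / (1024 * 1024));
    return 0;
}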

Re: [Qemu-devel] [PATCH 00/21] Kemari for KVM 0.2

2010-12-01 Thread Takuya Yoshikawa
(2010/11/30 1:41), Dor Laor wrote: Is this a fair summary: any device that supports live migration works under Kemari? It might be a fair summary, but practically we barely have live migration working w/o Kemari. In addition, last I checked Kemari needs additional hooks and it will be too hard

[Qemu-devel] Re: [PATCH 09/10] Exit loop if we have been there too long

2010-11-30 Thread Takuya Yoshikawa
On Wed, 01 Dec 2010 02:52:08 +0100 Juan Quintela wrote: > > Since we are planning to do some profiling for these, taking into account > > Kemari, can you please share this information? > > If you see the 0/10 email with this setup, you can see how much time we are > spending on stuff. Just now

[Qemu-devel] Re: [PATCH 09/10] Exit loop if we have been there too long

2010-11-30 Thread Takuya Yoshikawa
That's what the patch set I was alluding to did. Or maybe I imagined > the whole thing. > > >>> We also need to implement live migration in a separate thread that > >>> doesn't carry qemu_mutex while it runs. > >> > >> IMO that's the biggest hit currently. > > > > Yup. That's the correct solution to the problem. > > Then let's just do it. > > -- > error compiling committee.c: too many arguments to function -- Takuya Yoshikawa
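The suggestion in the quote, streaming RAM from a separate thread that does not hold the global mutex, can be sketched conceptually as below; this is an invented model of the idea, not the migration-thread code QEMU later gained:

/*
 * Conceptual model: the long bulk-copy work runs without the global lock,
 * which is taken only for the short phases that really need it.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t qemu_global_mutex = PTHREAD_MUTEX_INITIALIZER;

static void sync_dirty_bitmap(void) { /* needs the global mutex */ }
static int  send_dirty_ram(void)     { return 0; /* lock-free bulk copy */ }

static void *migration_thread(void *arg)
{
    (void)arg;
    for (int round = 0; round < 3; round++) {
        pthread_mutex_lock(&qemu_global_mutex);
        sync_dirty_bitmap();                 /* short critical section */
        pthread_mutex_unlock(&qemu_global_mutex);

        send_dirty_ram();                    /* long work, no global lock held */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, migration_thread, NULL);
    pthread_join(tid, NULL);
    puts("migration rounds done");
    return 0;
}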

Re: [Qemu-devel] [RFC PATCH 00/20] Kemari for KVM v0.1

2010-04-22 Thread Takuya Yoshikawa
(2010/04/22 19:35), Yoshiaki Tamura wrote: A trivial one would be to: - do X online snapshots/sec I currently don't have good numbers that I can share right now. Snapshots/sec depends on what kind of workload is running, and if the guest is almost idle, there will be no snapshots in 5 sec.