On 20.06.2012, at 01:52, Benjamin Herrenschmidt wrote:
> On Wed, 2012-06-20 at 01:30 +0200, Alexander Graf wrote:
>>> We support the paravirtualized -M pseries in full emu as well, in which
>>> case the hashed page table is handled by qemu itself, which implements
>>> the H_ENTER & co hypercalls. So
On Wed, 2012-06-20 at 01:30 +0200, Alexander Graf wrote:
> > We support the paravirtualized -M pseries in full emu as well, in which
> > case the hashed page table is handled by qemu itself, which implements
> > the H_ENTER & co hypercalls. So it's very similar, except that qemu
> > doesn't have to as
On 20.06.2012, at 01:28, Benjamin Herrenschmidt wrote:
> On Wed, 2012-06-20 at 01:11 +0200, Juan Quintela wrote:
>>
>>> I am confident I can come up with something as far as the kernel and
>>> qemu <-> kernel interface goes. I need to get my head around the
>>> details of how to implement that two
On Wed, 2012-06-20 at 01:11 +0200, Juan Quintela wrote:
>
> > I am confident I can come up with something as far as the kernel and
> > qemu <-> kernel interface goes. I need to get my head around the
> > details of how to implement that two stage save process in qemu though
> > and the corresponding
Benjamin Herrenschmidt wrote:
> On Wed, 2012-06-20 at 00:55 +0200, Juan Quintela wrote:
>>
>> This was going to be my question.
>>
>> If we can do something like: send hash register, and get a bitmap of
>> the ones that get changed, we should be good. Perhaps we need
>> something "interesting"
On Wed, 2012-06-20 at 00:55 +0200, Juan Quintela wrote:
>
> This was going to be my question.
>
> If we can do something like: send hash register, and get a bitmap of
> the ones that get changed, we should be good. Perhaps we need something
> "interesting" like removing old entries (no clue if
Alexander Graf wrote:
> On 19.06.2012, at 22:30, Benjamin Herrenschmidt wrote:
>
>> On Tue, 2012-06-19 at 16:59 +0200, Juan Quintela wrote:
>>> - The hash table (mentioned above). This is just a big chunk of
>>> memory (it will routinely be 16M), so I really don't want to start
>>> iterating
On Wed, 2012-06-20 at 00:27 +0200, Alexander Graf wrote:
>
> > I need to understand better how to do that vs. qemu save/restore
> > though. IE. That means we can't just save the hash as a bulk and
> > reload it, but we'd have to save bits of it at a time or something
> > like that no ? Or do we save
On 19.06.2012, at 23:51, Benjamin Herrenschmidt wrote:
> On Tue, 2012-06-19 at 23:48 +0200, Alexander Graf wrote:
>>> We could keep track manually maybe using some kind of dirty bitmap of
>>> changes to the hash table but that would add overhead to things like
>>> H_ENTER.
>>
>> Only during migration
On Tue, 2012-06-19 at 23:48 +0200, Alexander Graf wrote:
> > We could keep track manually maybe using some kind of dirty bitmap of
> > changes to the hash table but that would add overhead to things like
> > H_ENTER.
>
> Only during migration, right?
True. It will be an "interesting" user/kernel
On 19.06.2012, at 23:13, Benjamin Herrenschmidt
wrote:
> On Tue, 2012-06-19 at 23:00 +0200, Alexander Graf wrote:
>> How is the problem different from RAM? It's a 16MB region that can be
>> accessed by the guest even during transfer time, so it can get dirty
>> during the migration. But we only
On Tue, 2012-06-19 at 23:00 +0200, Alexander Graf wrote:
> How is the problem different from RAM? It's a 16MB region that can be
> accessed by the guest even during transfer time, so it can get dirty
> during the migration. But we only need to really transfer the last
> small delta at the end of the
On 19.06.2012, at 22:30, Benjamin Herrenschmidt wrote:
> On Tue, 2012-06-19 at 16:59 +0200, Juan Quintela wrote:
>> - The hash table (mentioned above). This is just a big chunk of
>> memory (it will routinely be 16M), so I really don't want to start
>> iterating all elements, just a bulk
On Tue, 2012-06-19 at 16:59 +0200, Juan Quintela wrote:
> >> - The hash table (mentioned above). This is just a big chunk of
> >> memory (it will routinely be 16M), so I really don't want to start
> >> iterating all elements, just a bulk load will do, and the size might
> >> actually be variable
Juan,
On 19.06.2012 16:59, Juan Quintela wrote:
> Alexander Graf wrote:
>> On 09.06.2012, at 13:34, Benjamin Herrenschmidt wrote:
>>
>>> Ok, so I'm told there are patches to convert ppc, I haven't seen them in
>>> my list archives, so if somebody has a pointer, please shoot, that will
>>> save
Alexander Graf wrote:
> On 09.06.2012, at 13:34, Benjamin Herrenschmidt wrote:
>
>> On Sat, 2012-06-09 at 20:53 +1000, Benjamin Herrenschmidt wrote:
>>> Hi folks !
>>
>> (After some discussion with Andreas ...)
>>
>>> I'm looking at sorting out the state save/restore of target-ppc (which
>> means
On 09.06.2012, at 13:34, Benjamin Herrenschmidt wrote:
> On Sat, 2012-06-09 at 20:53 +1000, Benjamin Herrenschmidt wrote:
>> Hi folks !
>
> (After some discussion with Andreas ...)
>
>> I'm looking at sorting out the state save/restore of target-ppc (which
>> means understanding in general how
Hi,
On 09.06.2012 13:34, Benjamin Herrenschmidt wrote:
> On Sat, 2012-06-09 at 20:53 +1000, Benjamin Herrenschmidt wrote:
> (After some discussion with Andreas ...)
>
>> I'm looking at sorting out the state save/restore of target-ppc (which
>> means understanding in general how it works in qemu
On Sat, 2012-06-09 at 20:53 +1000, Benjamin Herrenschmidt wrote:
> Hi folks !
(After some discussion with Andreas ...)
> I'm looking at sorting out the state save/restore of target-ppc (which
> means understanding in general how it works in qemu :-)
>
> So far I've somewhat figured out that there's
Hi folks !
I'm looking at sorting out the state save/restore of target-ppc (which
means understanding in general how it works in qemu :-)
So far I've somewhat figured out that there's the "old way" where we
just provide a "bulk" save/restore function pair, and the "new way"
where we have nicely t