Juan Quintela writes:
[...]
> I have thought a bit about hotplug & migration, and haven't arrived
> at a nice solution.
>
> - Disabling hotplug/unplug during migration: easy to do. But it is not
> exactly user friendly (we are here).
>
> - Allowing hotplug during migration. Solutions:
>
On 07/15/2011 10:59 AM, Paolo Bonzini wrote:
> On 07/14/2011 06:07 PM, Avi Kivity wrote:
>> Maybe we can do this via a magic subsection whose contents are the
>> hotplug event.
>
> What about making the device list just another "thing" that has to be
> migrated live, together with block and ram?

Excellent [...]
On 07/15/2011 02:59 AM, Paolo Bonzini wrote:
> On 07/14/2011 06:07 PM, Avi Kivity wrote:
>> Maybe we can do this via a magic subsection whose contents are the
>> hotplug event.
>
> What about making the device list just another "thing" that has to be
> migrated live, together with block and ram?

In an id[...]
On 07/14/2011 06:07 PM, Avi Kivity wrote:
> Maybe we can do this via a magic subsection whose contents are the
> hotplug event.

What about making the device list just another "thing" that has to be
migrated live, together with block and ram?

Paolo
On 07/14/2011 07:32 AM, Avi Kivity wrote:
> On 07/14/2011 03:30 PM, Anthony Liguori wrote:
>> Does this mean that the following code is sometimes executed without
>> qemu_mutex? I don't think any of it is thread safe.
>
> That was my reaction too.
>
> I think the most rational thing to do is have a separate [...]
On 07/14/2011 07:49 PM, Anthony Liguori wrote:
> I think a reference count based approach is really the only sane thing
> to do and if we did that, it wouldn't be a problem since the reference
> would be owned by the I/O thread and would live until the migration
> thread is done with the VA.

I wa[...]
On 07/14/2011 06:52 PM, Juan Quintela wrote:
>
>> Notice that hotplug/unplug during
>> migration don't make a lot of sense anyways.
>
> That's completely wrong. Hotplug is a guest/end-user operation;
> migration is a host/admin operation. The two don't talk to each other
> at all - if the [...]
Avi Kivity wrote:
>> Disabling hotplug should be enough?
>
> So is powering down the destination host.
O:-) You see, I explained that later. O:-)
>
>> Notice that hotplug/unplug during
>> migration don't make a lot of sense anyways.
>
> That's completely wrong. Hotplug is a guest/end-user [...]
On 07/14/2011 06:30 PM, Juan Quintela wrote:
Avi Kivity wrote:
> On 07/14/2011 03:30 PM, Anthony Liguori wrote:
>>> Does this mean that the following code is sometimes executed without
>>> qemu_mutex? I don't think any of it is thread safe.
>>
>>
>> That was my reaction too.
>>
>> I think the most rational thing to do is have a separate [...]
Avi Kivity wrote:
> On 07/14/2011 03:30 PM, Anthony Liguori wrote:
>>> Does this mean that the following code is sometimes executed without
>>> qemu_mutex? I don't think any of it is thread safe.
>>
>>
>> That was my reaction too.
>>
>> I think the most rational thing to do is have a separate thread [...]
On 07/14/2011 03:30 PM, Anthony Liguori wrote:
> Does this mean that the following code is sometimes executed without
> qemu_mutex? I don't think any of it is thread safe.

That was my reaction too.

I think the most rational thing to do is have a separate thread and a
pair of producer/consumer queues [...]
On 07/14/2011 03:36 AM, Avi Kivity wrote:
> On 07/14/2011 10:14 AM, Umesh Deshpande wrote:
>> The following patch is implemented to deal with the VCPU and iothread
>> starvation during the migration of a guest. Currently the iothread is
>> responsible for performing the migration. It holds the qemu_mutex
>> during the [...]
On Thu, Jul 14, 2011 at 9:36 AM, Avi Kivity wrote:
> On 07/14/2011 10:14 AM, Umesh Deshpande wrote:
>> @@ -260,10 +260,15 @@ int ram_save_live(Monitor *mon, QEMUFile *f, int
>> stage, void *opaque)
>>          return 0;
>>      }
>>
>> +    if (stage != 3)
>> +        qemu_mutex_lock_iothread();
>
> [...]
On 07/14/2011 10:14 AM, Umesh Deshpande wrote:
> The following patch is implemented to deal with the VCPU and iothread
> starvation during the migration of a guest. Currently the iothread is
> responsible for performing the migration. It holds the qemu_mutex during
> the migration and doesn't allow the VCPU to enter qemu mode and delays
> its return to the guest [...]