On 05/18/2010 07:58 PM, Cam Macdonell wrote:
My question is how do I unregister the physical memory so it is not
copied on migration (for the role=peer case). There isn't a
cpu_unregister_physical_memory().
It doesn't need to be unregistered, simply marked not migratable.
Perhaps a flag
On Mon, May 10, 2010 at 10:52 AM, Anthony Liguori wrote:
>> Yes, I think the ack is the way to go, so the guest has to be aware of
>> it. Would setting a flag in the driver-specific config space be an
>> acceptable ack that the shared region is now mapped?
>>
>
> You know it's mapped because it's
On 05/14/2010 12:10 AM, Cam Macdonell wrote:
On Mon, May 10, 2010 at 5:59 AM, Avi Kivity wrote:
On 04/21/2010 08:53 PM, Cam Macdonell wrote:
+
+/* allocate/initialize space for interrupt handling */
+s->eventfds = qemu_mallocz(s->nr_alloc_guests * sizeof(int *));
On Mon, May 10, 2010 at 5:59 AM, Avi Kivity wrote:
> On 04/21/2010 08:53 PM, Cam Macdonell wrote:
>> +
>> + /* allocate/initialize space for interrupt handling */
>> + s->eventfds = qemu_mallocz(s->nr_alloc_guests * sizeof(int *));
>> + s->eventfd_table = qemu_mallocz(s->vect
On 05/12/2010 07:14 PM, Cam Macdonell wrote:
Why can't we complete initialization before exposing the card and BAR?
Seems to be the simplest solution.
Looking at it more closely, you're right, the fds for shared
memory/eventfds are received in a fraction of a second, so that's why
I ha
On Wed, May 12, 2010 at 9:49 AM, Avi Kivity wrote:
> On 05/10/2010 07:48 PM, Cam Macdonell wrote:
>>
>> On Mon, May 10, 2010 at 10:40 AM, Avi Kivity wrote:
>>
>>>
>>> On 05/10/2010 06:41 PM, Cam Macdonell wrote:
>>>
>
> What would happen to any data written to the BAR before the
On 05/10/2010 07:48 PM, Cam Macdonell wrote:
On Mon, May 10, 2010 at 10:40 AM, Avi Kivity wrote:
On 05/10/2010 06:41 PM, Cam Macdonell wrote:
What would happen to any data written to the BAR before the handshake
completed? I think it would disappear.
But, the
On 05/12/2010 06:32 PM, Cam Macdonell wrote:
We can tunnel its migration data through qemu. Of course, gathering its
dirty bitmap will be interesting. DSM may be the way to go here (we can
even live migrate qemu through DSM: share the guest address space and
immediately start running on the d
On Tue, May 11, 2010 at 12:13 PM, Avi Kivity wrote:
> On 05/11/2010 08:05 PM, Anthony Liguori wrote:
>>
>> On 05/11/2010 11:39 AM, Cam Macdonell wrote:
>>>
>>> Most of the people I hear from who are using my patch are using a peer
>>> model to share data between applications (simulations, JVMs, et
On 05/11/2010 11:39 AM, Cam Macdonell wrote:
Most of the people I hear from who are using my patch are using a peer
model to share data between applications (simulations, JVMs, etc).
But guest-to-host applications work as well of course.
I think "transparent migration" can be achieved by making
On 05/11/2010 08:05 PM, Anthony Liguori wrote:
On 05/11/2010 11:39 AM, Cam Macdonell wrote:
Most of the people I hear from who are using my patch are using a peer
model to share data between applications (simulations, JVMs, etc).
But guest-to-host applications work as well of course.
I think "
On 05/11/2010 06:51 PM, Anthony Liguori wrote:
On 05/11/2010 09:53 AM, Avi Kivity wrote:
On 05/11/2010 05:17 PM, Cam Macdonell wrote:
The master is the shared memory area. It's a completely separate
entity
that is represented by the backing file (or shared memory server
handing out
the fd
On Tue, May 11, 2010 at 11:05 AM, Anthony Liguori wrote:
> On 05/11/2010 11:39 AM, Cam Macdonell wrote:
>>
>> Most of the people I hear from who are using my patch are using a peer
>> model to share data between applications (simulations, JVMs, etc).
>> But guest-to-host applications work as well
On Tue, May 11, 2010 at 9:51 AM, Anthony Liguori wrote:
> On 05/11/2010 09:53 AM, Avi Kivity wrote:
>>
>> On 05/11/2010 05:17 PM, Cam Macdonell wrote:
>>>
The master is the shared memory area. It's a completely separate entity
that is represented by the backing file (or shared memory se
On 05/11/2010 09:53 AM, Avi Kivity wrote:
On 05/11/2010 05:17 PM, Cam Macdonell wrote:
The master is the shared memory area. It's a completely separate
entity
that is represented by the backing file (or shared memory server
handing out
the fd to mmap). It can exist independently of any gu
On 05/11/2010 05:17 PM, Cam Macdonell wrote:
The master is the shared memory area. It's a completely separate entity
that is represented by the backing file (or shared memory server handing out
the fd to mmap). It can exist independently of any guest.
I think the master/peer idea woul
On Tue, May 11, 2010 at 8:03 AM, Avi Kivity wrote:
> On 05/11/2010 04:10 PM, Anthony Liguori wrote:
>>
>> On 05/11/2010 02:59 AM, Avi Kivity wrote:
(Replying again to list)
What data structure would you use? For a lockless ring queue, you can
only support a single produce
On 05/11/2010 04:10 PM, Anthony Liguori wrote:
On 05/11/2010 02:59 AM, Avi Kivity wrote:
(Replying again to list)
What data structure would you use? For a lockless ring queue, you
can only support a single producer and consumer. To achieve
bidirectional communication in virtio, we always us
On 05/11/2010 02:59 AM, Avi Kivity wrote:
(Replying again to list)
What data structure would you use? For a lockless ring queue, you
can only support a single producer and consumer. To achieve
bidirectional communication in virtio, we always use two queues.
You don't have to use a lockles
On 05/11/2010 02:17 AM, Cam Macdonell wrote:
On Mon, May 10, 2010 at 5:59 AM, Avi Kivity wrote:
On 04/21/2010 08:53 PM, Cam Macdonell wrote:
Support an inter-vm shared memory device that maps a shared-memory object
as a
PCI device in the guest. This patch also supports interrupts be
On 05/10/2010 08:25 PM, Anthony Liguori wrote:
On 05/10/2010 11:59 AM, Avi Kivity wrote:
On 05/10/2010 06:38 PM, Anthony Liguori wrote:
Otherwise, if the BAR is allocated during initialization, I would
have
to use MAP_FIXED to mmap the memory. This is what I did before the
qemu_ram_mmap() f
On 05/10/2010 08:52 PM, Anthony Liguori wrote:
Why try to attempt to support multi-master shared memory? What's the
use-case?
I don't see it as multi-master, but that the latest guest to join
shouldn't have its contents take precedence. In developing this
patch, my motivation has been to let
On Mon, May 10, 2010 at 5:59 AM, Avi Kivity wrote:
> On 04/21/2010 08:53 PM, Cam Macdonell wrote:
>>
>> Support an inter-vm shared memory device that maps a shared-memory object
>> as a
>> PCI device in the guest. This patch also supports interrupts between
>> guests by
>> communicating over a uni
On Mon, May 10, 2010 at 11:52 AM, Anthony Liguori wrote:
> On 05/10/2010 12:43 PM, Cam Macdonell wrote:
>>
>> On Mon, May 10, 2010 at 11:25 AM, Anthony Liguori
>> wrote:
>>
>>>
>>> On 05/10/2010 11:59 AM, Avi Kivity wrote:
>>>
On 05/10/2010 06:38 PM, Anthony Liguori wrote:
>
>>
On 05/10/2010 12:43 PM, Cam Macdonell wrote:
On Mon, May 10, 2010 at 11:25 AM, Anthony Liguori wrote:
On 05/10/2010 11:59 AM, Avi Kivity wrote:
On 05/10/2010 06:38 PM, Anthony Liguori wrote:
Otherwise, if the BAR is allocated during initialization, I would have
to
On Mon, May 10, 2010 at 11:25 AM, Anthony Liguori wrote:
> On 05/10/2010 11:59 AM, Avi Kivity wrote:
>>
>> On 05/10/2010 06:38 PM, Anthony Liguori wrote:
>>>
> Otherwise, if the BAR is allocated during initialization, I would have
> to use MAP_FIXED to mmap the memory. This is what I did
On 05/10/2010 11:59 AM, Avi Kivity wrote:
On 05/10/2010 06:38 PM, Anthony Liguori wrote:
Otherwise, if the BAR is allocated during initialization, I would have
to use MAP_FIXED to mmap the memory. This is what I did before the
qemu_ram_mmap() function was added.
What would happen to any dat
On 05/10/2010 06:38 PM, Anthony Liguori wrote:
Otherwise, if the BAR is allocated during initialization, I would have
to use MAP_FIXED to mmap the memory. This is what I did before the
qemu_ram_mmap() function was added.
What would happen to any data written to the BAR before the
handsh
On 05/10/2010 11:20 AM, Cam Macdonell wrote:
On Mon, May 10, 2010 at 9:38 AM, Anthony Liguori wrote:
On 05/10/2010 10:28 AM, Avi Kivity wrote:
On 05/10/2010 06:22 PM, Cam Macdonell wrote:
+
+/* if the position is -1, then it's shared memory region f
On Mon, May 10, 2010 at 10:40 AM, Avi Kivity wrote:
> On 05/10/2010 06:41 PM, Cam Macdonell wrote:
>>
>>> What would happen to any data written to the BAR before the handshake
>>> completed? I think it would disappear.
>>>
>>
>> But, the BAR isn't there until the handshake is completed. Only
On 05/10/2010 06:41 PM, Cam Macdonell wrote:
What would happen to any data written to the BAR before the handshake
completed? I think it would disappear.
But, the BAR isn't there until the handshake is completed. Only after
receiving the shared memory fd does my device call pci_reg
On Mon, May 10, 2010 at 9:38 AM, Anthony Liguori wrote:
> On 05/10/2010 10:28 AM, Avi Kivity wrote:
>>
>> On 05/10/2010 06:22 PM, Cam Macdonell wrote:
>>>
> +
> + /* if the position is -1, then it's shared memory region fd */
> + if (incoming_posn == -1) {
> +
> +
On Mon, May 10, 2010 at 9:28 AM, Avi Kivity wrote:
> On 05/10/2010 06:22 PM, Cam Macdonell wrote:
>>
>>>
+
+ /* if the position is -1, then it's shared memory region fd */
+ if (incoming_posn == -1) {
+
+ s->num_eventfds = 0;
+
+ if (check_shm
On 05/10/2010 10:28 AM, Avi Kivity wrote:
On 05/10/2010 06:22 PM, Cam Macdonell wrote:
+
+/* if the position is -1, then it's shared memory region fd */
+if (incoming_posn == -1) {
+
+s->num_eventfds = 0;
+
+if (check_shm_size(s, incoming_fd) == -1) {
+exi
On 05/10/2010 06:22 PM, Cam Macdonell wrote:
+
+/* if the position is -1, then it's shared memory region fd */
+if (incoming_posn == -1) {
+
+s->num_eventfds = 0;
+
+if (check_shm_size(s, incoming_fd) == -1) {
+exit(-1);
+}
+
+/* creating a
On Mon, May 10, 2010 at 5:59 AM, Avi Kivity wrote:
> On 04/21/2010 08:53 PM, Cam Macdonell wrote:
>>
>> Support an inter-vm shared memory device that maps a shared-memory object
>> as a
>> PCI device in the guest. This patch also supports interrupts between
>> guests by
>> communicating over a uni
On 04/21/2010 08:53 PM, Cam Macdonell wrote:
Support an inter-vm shared memory device that maps a shared-memory object as a
PCI device in the guest. This patch also supports interrupts between guests by
communicating over a unix domain socket. This patch applies to the qemu-kvm
repository.
On Thu, May 6, 2010 at 11:32 AM, Anthony Liguori wrote:
> On 04/21/2010 12:53 PM, Cam Macdonell wrote:
>>
>> Support an inter-vm shared memory device that maps a shared-memory object
>> as a
>> PCI device in the guest. This patch also supports interrupts between
>> guests by
>> communicating over
On 04/21/2010 12:53 PM, Cam Macdonell wrote:
Support an inter-vm shared memory device that maps a shared-memory object as a
PCI device in the guest. This patch also supports interrupts between guests by
communicating over a unix domain socket. This patch applies to the qemu-kvm
repository.