>>> On 22.07.15 at 19:24, wrote:
> I'll queue this change up for the next QEMU release cycle.
Thanks - v2 (with the adjusted description) just sent.
It would however be nice for our variant in 4.6 to also gain this,
perhaps independent of upstream's schedule.
Jan

>>> On 22.07.15 at 16:50, wrote:
> On Wed, 22 Jul 2015, Jan Beulich wrote:
>> >> --- a/xen-hvm.c
>> >> +++ b/xen-hvm.c
>> >> @@ -981,19 +981,30 @@ static void handle_ioreq(XenIOState *sta
>> >>
>> >> static int handle_buffered_iopage(XenIOState *state)
>> >> {
>> >> +buffered_iopage_t *buf
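
The hunk is cut off by the archive, but the shape of the rework is still
recognizable: take a local snapshot of read_pointer, read write_pointer
behind a memory barrier, re-check the snapshot to detect a concurrent
update, and only advance read_pointer with a single atomic add once the
request has been handled. What follows is a minimal standalone sketch of
that access pattern using C11 atomics; ring_t, consume(), handle_req()
and SLOTS are hypothetical stand-ins, the in-tree code operates on
buffered_iopage_t with QEMU's own barrier and atomic helpers, and the
real patch additionally copes with 64-bit requests that occupy two
consecutive slots.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define SLOTS 511u               /* mirrors IOREQ_BUFFER_SLOT_NUM */

typedef struct {
    _Atomic uint32_t read_pointer;   /* advanced (atomically) by the consumer */
    _Atomic uint32_t write_pointer;  /* advanced by the producer */
    uint32_t slot[SLOTS];            /* stand-in for the buf_ioreq[] array */
} ring_t;

static void handle_req(uint32_t v)
{
    printf("req %u\n", v);
}

/* Consume everything currently in the ring; returns the number handled. */
static unsigned consume(ring_t *ring)
{
    unsigned handled = 0;

    for (;;) {
        /* Snapshot the read pointer, then the write pointer, with acquire
         * ordering so that slot contents written before the producer's
         * pointer update are visible. */
        uint32_t rdptr = atomic_load_explicit(&ring->read_pointer,
                                              memory_order_acquire);
        uint32_t wrptr = atomic_load_explicit(&ring->write_pointer,
                                              memory_order_acquire);

        /* If read_pointer changed underneath us (e.g. the other side
         * brought both pointers back into range after a wrap), the
         * snapshot is stale: retry rather than index the ring with it. */
        if (rdptr != atomic_load_explicit(&ring->read_pointer,
                                          memory_order_relaxed)) {
            continue;
        }
        if (rdptr == wrptr) {
            break;                              /* ring is empty */
        }

        handle_req(ring->slot[rdptr % SLOTS]);

        /* A single atomic increment, never a plain ++, so concurrent
         * readers always observe a consistent value. */
        atomic_fetch_add_explicit(&ring->read_pointer, 1,
                                  memory_order_release);
        handled++;
    }

    return handled;
}

int main(void)
{
    static ring_t ring;

    for (uint32_t i = 0; i < 3; i++) {      /* producer side, simplified */
        ring.slot[i % SLOTS] = i;
        atomic_fetch_add_explicit(&ring.write_pointer, 1,
                                  memory_order_release);
    }
    printf("%u requests handled\n", consume(&ring));
    return 0;
}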

The number of slots per page being 511 (i.e. not a power of two) means
that the (32-bit) read and write indexes going beyond 2^32 will likely
disturb operation. The hypervisor side gets I/O req server creation
extended so we can indicate that we're using suitable atomic accesses
where needed (not a
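
The description is truncated here, but the arithmetic behind it is easy
to reproduce: 2^32 is not a multiple of 511 (2^9 ≡ 1 mod 511, hence
2^32 ≡ 32 mod 511), so a 32-bit index that wraps around no longer lands
on the slot the 64-bit sequence would have reached. A small standalone
illustration, with SLOTS standing in for IOREQ_BUFFER_SLOT_NUM from the
Xen public headers:

#include <stdint.h>
#include <stdio.h>

#define SLOTS 511u   /* IOREQ_BUFFER_SLOT_NUM: one page of buf_ioreq_t slots */

int main(void)
{
    uint32_t i32 = UINT32_MAX;               /* index just before the wrap */
    uint64_t i64 = UINT32_MAX;               /* same index, kept in 64 bits */

    printf("before wrap:       slot %u\n", i32 % SLOTS);                 /* 31 */
    printf("64-bit next index: slot %llu\n",
           (unsigned long long)((i64 + 1) % SLOTS));                     /* 32 */
    printf("32-bit next index: slot %u\n", (uint32_t)(i32 + 1) % SLOTS); /* 0  */

    /* The 32-bit index jumps from slot 31 to slot 0 instead of moving on
     * to slot 32, so producer and consumer disagree about where the next
     * request lives.  With a power-of-two slot count the wrap would be
     * invisible, which is why 511 slots force either atomic handling of
     * the pointers or keeping them canonical. */
    return 0;
}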