On Mon, Nov 27, 2017 at 09:44:07PM -0500, Matthew Rosato wrote:
> On 11/27/2017 08:36 PM, Jason Wang wrote:
> > On Nov 28, 2017 at 00:21, Wei Xu wrote:
> > > On Mon, Nov 20, 2017 at 02:25:17PM -0500, Matthew Rosato wrote:
> > > > On 11/14/2017 03:11 PM, Matthew Rosato wrote:
> > > > > On 11/12/2017 01:34 PM, Wei Xu wrote:
On Tue, Nov 07, 2017 at 08:02:48PM -0500, Matthew Rosato wrote:
> On 11/04/2017 07:35 PM, Wei Xu wrote:
> > On Fri, Nov 03, 2017 at 12:30:12AM -0400, Matthew Rosato wrote:
> >> On 10/31/2017 03:07 AM, Wei Xu wrote:
> >>> On Thu, Oct 26, 2017 at 01:53:12PM -0400, Matthew Rosato wrote:
>> This case should be quite similar to pktgen: if you got improvement with
>> pktgen, usually it was also the same for UDP. Could you please try to disable
>> tso, gso, gro, ufo on all host tap devices and guest virtio-net devices?
>> Currently the most significant tests would be like this A
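A minimal sketch of the offload toggling suggested above, assuming the host
tap device is named tap0 and the guest-side virtio-net interface is eth0
(both device names are placeholders):

    # Host side: disable segmentation/receive offloads on each tap device
    # backing a guest NIC (repeat per tap device).
    ethtool -K tap0 tso off gso off gro off ufo off

    # Guest side: the same for the virtio-net interface.
    ethtool -K eth0 tso off gso off gro off ufo off

    # Verify which offloads are actually active afterwards.
    ethtool -k tap0

Note that on kernels after the UFO removal (~4.14) the ufo toggle may be
reported as fixed; on the 4.12/4.13 kernels discussed here it was settable.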
On Oct 31, 2017 at 15:07, Wei Xu wrote:
BTW, did you see any improvement when running pktgen from the host if no
regression was found? Since this can be reproduced with only 1 vcpu for the
guest, could you try this binding? This might help simplify the problem.
vcpu0 -> cpu2
vhost -> cpu3
pktgen
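A sketch of one way to apply that binding with taskset; the PID variables and
the cpu chosen for pktgen are placeholders, since the preview above is cut
off before the pktgen line completes:

    # Pin the guest's vcpu0 thread (PID found under /proc/<qemu-pid>/task
    # or via the management tool) to host cpu2.
    taskset -pc 2 "$VCPU0_PID"

    # Pin the vhost-<qemu-pid> kernel thread to host cpu3.
    taskset -pc 3 "$VHOST_PID"

    # Run the pktgen kernel thread on some other, otherwise idle cpu.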
On Thu, Oct 26, 2017 at 01:53:12PM -0400, Matthew Rosato wrote:
> Are you using the same binding as mentioned in the previous mail sent by you?
> It might be caused by cpu contention between pktgen and vhost, could you
> please try to run pktgen from another idle cpu by adjusting the binding?
I don't think that's the case -- I can cause pktgen to hang in the
On Oct 19, 2017 at 04:17, Matthew Rosato wrote:
2. It might be useful to shorten the traffic path as a reference. What I am
running is briefly like:
pktgen(host kernel) -> tap(x) -> guest(DPDK testpmd)
The bridge driver (br_forward(), etc.) might impact performance, in my
personal experience,
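A rough sketch of driving that shortened path with the in-kernel pktgen,
assuming the tap device is tap0 and one pktgen thread on cpu0; the device
name, destination address and MAC are placeholders:

    # Load the in-kernel packet generator.
    modprobe pktgen

    # Attach the tap device to pktgen's cpu0 thread.
    echo "rem_device_all" > /proc/net/pktgen/kpktgend_0
    echo "add_device tap0" > /proc/net/pktgen/kpktgend_0

    # Basic UDP stream parameters (values are illustrative).
    echo "count 10000000" > /proc/net/pktgen/tap0
    echo "pkt_size 60" > /proc/net/pktgen/tap0
    echo "dst 10.0.0.2" > /proc/net/pktgen/tap0
    echo "dst_mac 52:54:00:12:34:56" > /proc/net/pktgen/tap0

    # Start transmitting; per-device results appear in /proc/net/pktgen/tap0.
    echo "start" > /proc/net/pktgen/pgctrl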
On Thu, Oct 05, 2017 at 04:07:45PM -0400, Matthew Rosato wrote:
>
> Ping... Jason, any other ideas or suggestions?
Hi Matthew,
Recently I am doing a similar test on x86 for this patch; here are some
differences between our testbeds.
1. It is nice you have got improvement with 50+ instances(or co
On Sep 21, 2017 at 03:38, Matthew Rosato wrote:
Seems to make some progress on wakeup mitigation. The previous patch tries
to reduce the unnecessary traversal of the waitqueue during rx. The attached
patch goes even further and disables rx polling during tx processing.
Please try it to see if it has any difference.
> Seems to make some progress on wakeup mitigation. The previous patch tries
> to reduce the unnecessary traversal of the waitqueue during rx. The attached
> patch goes even further and disables rx polling during tx processing.
> Please try it to see if it has any difference.
Unfortunately, this patch does
On Sep 18, 2017 at 11:13, Jason Wang wrote:
> On Sep 16, 2017 at 03:19, Matthew Rosato wrote:
It looks like vhost is slowed down for some reason, which leads to more
idle time on 4.13+VHOST_RX_BATCH=1. It would be appreciated if you could
collect the perf.diff on the host, one for rx and one for tx.
> It looks like vhost is slowed down for some reason, which leads to more
> idle time on 4.13+VHOST_RX_BATCH=1. It would be appreciated if you could
> collect the perf.diff on the host, one for rx and one for tx.
perf data below for the associated vhost threads, baseline=4.12,
delta1=4.13, delta2=4.13+VHOST_RX_BATCH=1
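A sketch of one way to collect such a comparison with perf, assuming the
vhost thread PID is known and one perf.data file is captured per kernel;
the file names and PID variable are placeholders:

    # On each kernel under test, sample the vhost thread for ~30 seconds.
    perf record -p "$VHOST_PID" -o perf.data.4.12 -- sleep 30
    # (reboot into 4.13 and 4.13+VHOST_RX_BATCH=1, repeat with new -o names)

    # Compare the profiles; the first file acts as the baseline.
    perf diff perf.data.4.12 perf.data.4.13 perf.data.4.13-batch1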
On Sep 15, 2017 at 11:36, Matthew Rosato wrote:
> Is the issue gone if you reduce VHOST_RX_BATCH to 1? And it would also be
> helpful to collect a perf diff to see if there is anything interesting.
> (Considering 4.4 shows a more obvious regression, please use 4.4.)
Issue still exists when I force VHOST_RX_BATCH = 1
Collected perf data, with 4.12 as the baseline
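For reference, VHOST_RX_BATCH is a compile-time constant in
drivers/vhost/net.c (64 by default in 4.13), so forcing it to 1 means
editing the define and rebuilding the vhost modules; a minimal sketch,
assuming a configured 4.13 source tree and no guests running:

    # Change the batch size in the source tree.
    sed -i 's/#define VHOST_RX_BATCH 64/#define VHOST_RX_BATCH 1/' \
        drivers/vhost/net.c

    # Rebuild and reload just the vhost modules.
    make M=drivers/vhost modules
    rmmod vhost_net vhost
    insmod drivers/vhost/vhost.ko
    insmod drivers/vhost/vhost_net.ko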
On Sep 13, 2017 at 01:56, Matthew Rosato wrote:
We are seeing a regression for a subset of workloads across KVM guests
over a virtual bridge between host kernel 4.12 and 4.13. Bisecting
points to c67df11f ("vhost_net: try batch dequing from skb array").
In the regressed environment, we are running 4
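A sketch of how such a bisect is typically run between the two releases;
the benchmark step is whatever workload exhibits the regression:

    # Mark v4.13 bad and v4.12 good, then build/boot/benchmark each step.
    git bisect start v4.13 v4.12
    # ... at each step: git bisect good | git bisect bad ...

    # Confirm the culprit by reverting it on top of the bad kernel.
    git revert c67df11f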