On 16/05/21(Sun) 15:56, Vitaliy Makkoveev wrote:
> > On 14 May 2021, at 14:43, Martin Pieuchot wrote:
> >
> > On 13/05/21(Thu) 14:50, Vitaliy Makkoveev wrote:
> >> On Thu, May 13, 2021 at 01:15:05PM +0200, Hrvoje Popovski wrote:
> >>> On 13.5.2021. 1:25, Vitaliy Makkoveev wrote:
> >>>> It seems this lock order issue is not parallel diff specific.

On 13.5.2021. 1:25, Vitaliy Makkoveev wrote:
> It seems this lock order issue is not parallel diff specific.

Yes, you are right ... it seemed familiar, but I couldn't reproduce it
on an lacp trunk or without this diff, so I thought that the parallel
diff was the one to blame ..
sorry for the noise ..

It seems this lock order issue is not parallel diff specific.

> On 12 May 2021, at 12:58, Hrvoje Popovski wrote:
>
> On 21.4.2021. 21:36, Alexander Bluhm wrote:
>> We need more MP pressure to find such bugs and races. I think now
>> is a good time to give this diff broader testing and commit it.

On 21.4.2021. 21:36, Alexander Bluhm wrote:
> We need more MP pressure to find such bugs and races. I think now
> is a good time to give this diff broader testing and commit it.
> You need interfaces with multiple queues to see a difference.

Hi,

while forwarding ip4 traffic over a box with the parallel ...

On Thu, Apr 22, 2021 at 12:26:50AM +0200, Alexander Bluhm wrote:
> As a wild guess, you could apply this diff on top. Something similar
> has fixed an IPv6 NDP problem I have seen. Maybe it is in the routing
> table, which is used for both ARP and NDP.

Here are the performance numbers for forwarding with ...

On Thu, Apr 22, 2021 at 12:33:13PM +0200, Hrvoje Popovski wrote:
> r620-1# panic: pool_cache_item_magic_check: mbufpl cpu free list
> modified: item ...
> (a second CPU printed the same panic simultaneously, interleaving
> the console output)

On 22.4.2021. 12:38, Alexander Bluhm wrote:
> It is not clear why the lock helps. Is it a bug in routing or ARP?
> Or is it just different timing introduced by the additional kernel
> lock? The parallel network tasks stress the subsystems of the kernel
> more than before with MP load. Having more ...

On Thu, Apr 22, 2021 at 11:36:07AM +0200, Hrvoje Popovski wrote:
> On 22.4.2021. 11:02, Alexander Bluhm wrote:
> > This was without my kernel lock around ARP bandage, right?
>
> yes, yes ...

Good. Just wanted to be sure.

> > Did you enter "boot reboot" before doing "mach ddbcpu 0xa"?
>
> nope... I ...

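For readers who have not done the debugger dance being described: after
a panic on an MP kernel, ddb(4) can switch between CPUs to collect a
stack trace from each one before rebooting. A sketch of such a session
(the CPU numbers are just examples):

ddb{0}> trace
ddb{0}> mach ddbcpu 0xa
ddb{10}> trace
ddb{10}> boot reboot

"mach ddbcpu N" moves the debugger onto CPU N so its backtrace can be
captured; "boot reboot" leaves the debugger afterwards.
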
On 22.4.2021. 11:36, Hrvoje Popovski wrote:
> if you want I'll try to reproduce it on other boxes..
> maybe I can trigger it here easily because of the 2 sockets?

on the second box with 6 x Intel(R) Xeon(R) CPU E5-2643 v2 @ 3.50GHz,
3600.02 MHz..

r620-1# panic: pool_cache...
panic: pool_cache...
(two CPUs again, their console output interleaved)

On Thu, Apr 22, 2021 at 09:03:22AM +0200, Hrvoje Popovski wrote:
> something like this:
>
> x3550m4# panic: pool_do_get: mcl2k: page empty
> panic: pool_do_get: mbufpl: page empty
> panic: pool_cache_item_magic_check: mbuf...
> (three CPUs panicked at once, interleaving their console output)

This was without my kernel lock around ARP bandage, right?

> ddb{9}> mac ...

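The "kernel lock around ARP" bandage itself is not shown in this
listing. As a rough illustration of the idea only, not the actual
diff (the exact function it wrapped may differ; in_arpinput() is a
stand-in here), the big lock is taken around ARP input to serialize
it against the rest of the stack while the real race is hunted:

/*
 * Sketch, not the committed bandage: serialize ARP packet input
 * with the kernel lock.
 */
void
arpinput(struct ifnet *ifp, struct mbuf *m)
{
	KERNEL_LOCK();
	in_arpinput(ifp, m);
	KERNEL_UNLOCK();
}
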
On Wed, Apr 21, 2021 at 10:50:40PM +0200, Alexander Bluhm wrote:
> > 1108 pfkeyv2_send(struct socket *so, void *message, int len)
> > 1109 {
> >
> > 2013         ipsec_in_use++;
> > 2014         /*
> > 2015          * XXXSMP IPsec data structures are not ...

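The quoted pfkeyv2_send() lines point at the suspicious spot:
ipsec_in_use is incremented under one locking regime while the
now-parallel forwarding path may read it concurrently on another CPU.
A hedged sketch of one way to make the counter safe for lockless
readers (an illustration, not the fix that was committed; the real
variable's type would need adjusting for the atomic ops):

#include <sys/atomic.h>

/*
 * Sketch: update the global IPsec indicator atomically so unlocked
 * readers in the forwarding path see a consistent value.
 */
extern volatile unsigned int ipsec_in_use;

void
ipsec_mark_in_use(void)
{
	atomic_inc_int(&ipsec_in_use);
}
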
Hello,

> > Hi,
> >
> > with this diff i'm getting panic when i'm pushing traffic over that box.
> > This is plain forwarding. Do I need to compile with witness?
>
> with witness

any chance to check other CPUs to see what code they are executing?
I hope to be lucky enough and see the thread which ...

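On the "with witness" answer above: witness(4) is compiled out of
GENERIC.MP, so reproducing lock-order reports takes a custom kernel
config along these lines (a sketch; the file name is hypothetical):

# sys/arch/amd64/conf/WITNESS.MP (hypothetical)
include "arch/amd64/conf/GENERIC.MP"
option	WITNESS		# lock order checking, see witness(4)

Once booted, the kern.witness.* sysctls control how loudly it reports.
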
On Wed, Apr 21, 2021 at 11:28:17PM +0200, Hrvoje Popovski wrote:
> with this diff i'm getting panic when i'm pushing traffic over that box.

Thanks for testing.

> I'm sending traffic from a host connected to ix0, from address 10.10.0.1,
> to a host connected to ix1, to addresses 10.11.0.1 - 10.11.255.255

On Wed, Apr 21, 2021 at 11:27:15PM +0300, Vitaliy Makkoveev wrote:
> Did you test your diff with ipsec(4) enabled?

I enable it for the IPsec tests, but disable it for the others.
Doing IPsec policy checks would also slow down non-IPsec network
traffic if there is any flow in the kernel.

> I'm a...

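To make "slow down non-IPsec traffic" concrete: the forwarding path
gates the IPsec policy lookup on a global counter and skips it
entirely while no flows are loaded, so loading even one flow makes
every forwarded packet pay for the lookup. A simplified sketch of
that gate (the helper name is illustrative, not the real kernel
function):

#include <sys/param.h>
#include <sys/mbuf.h>

extern unsigned int ipsec_in_use;

int ipsec_policy_lookup(struct mbuf *);	/* illustrative stand-in */

int
forward_ipsec_check(struct mbuf *m)
{
	/* fast path: no SAs/flows in the kernel, skip IPsec entirely */
	if (ipsec_in_use == 0)
		return (0);
	/* slow path: every packet, IPsec or not, pays for the lookup */
	return (ipsec_policy_lookup(m));
}
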
Hello,

thanks for the good news.

On Wed, Apr 21, 2021 at 10:32:08PM +0200, Alexander Bluhm wrote:
> On Wed, Apr 21, 2021 at 09:59:53PM +0200, Alexandr Nedvedicky wrote:
> > was pf(4) enabled while running those tests?
>
> Yes.

On Wed, Apr 21, 2021 at 09:59:53PM +0200, Alexandr Nedvedicky wrote:
> was pf(4) enabled while running those tests?

Yes.

> if pf(4) was enabled while those tests were running,
> what rules were loaded to pf(4)?

Default pf.conf:

# $OpenBSD: pf.conf,v 1.55 2017/12/03 20:40:04 ...

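The ruleset is cut off after the CVS id above. From memory, the stock
pf.conf of that vintage is only a few lines; this reconstruction is an
editorial aid, not text quoted from the mail:

set skip on lo

block return	# block stateless traffic
pass		# establish keep-state

# By default, do not permit remote connections to X11
block return in on ! lo0 proto tcp to port 6000:6010

# Port build user does not need network
block return out log proto {tcp udp} user _pbuild
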
Hello,

just a quick question:
was pf(4) enabled while running those tests?

if pf(4) was enabled while those tests were running,
what rules were loaded to pf(4)?

my guess is pf(4) was not enabled when running those tests. if I
remember correctly, I could see a performance boost by a factor ...

Hi,

For a while we have been running the network stack without the kernel
lock, but with a network lock. The latter is an exclusive sleeping
rwlock.

It is possible to run the forwarding path in parallel on multiple
cores. I use ix(4) interfaces, which provide one input queue for
each CPU. For that we have to start multiple softnet tasks ...
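To make the announcement's mechanism concrete: with one input queue
per CPU, the stack needs one softnet task queue per input queue so
packets can be processed in parallel. A minimal sketch under assumed
names (the real diff is more involved and its identifiers may differ):

#include <sys/param.h>
#include <sys/task.h>

#define NSOFTNETQ	4	/* assumption: matches the ix(4) queue count */

struct taskq *softnettqs[NSOFTNETQ];

void
softnet_init(void)
{
	int i;

	/*
	 * One MP-safe task queue per softnet thread; each hardware
	 * input ring dispatches its packets to a fixed queue, so the
	 * forwarding path runs on several CPUs at once.
	 */
	for (i = 0; i < NSOFTNETQ; i++)
		softnettqs[i] = taskq_create("softnet", 1, IPL_NET,
		    TASKQ_MPSAFE);
}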