On Sat, Oct 02, 2021 at 03:23:57PM -0700, Chris Cappuccio wrote:
> Hrvoje Popovski [hrv...@srce.hr] wrote:
> >
> > box didn't panic, just stopped forwarding traffic through tunnel.
>
Any chance any progress has been made here? Are there any newer versions
of these diffs floating around?
Hrvoje Popovski [hrv...@srce.hr] wrote:
>
> box didn't panic, just stopped forwarding traffic through tunnel.
Any chance any progress has been made here? Are there any newer versions
of these diffs floating around?
On 23.7.2021. 16:20, Vitaliy Makkoveev wrote:
> On Thu, Jul 22, 2021 at 11:30:02PM +0200, Hrvoje Popovski wrote:
>> On 22.7.2021. 22:52, Vitaliy Makkoveev wrote:
>>> On Thu, Jul 22, 2021 at 08:38:04PM +0200, Hrvoje Popovski wrote:
On 22.7.2021. 12:21, Hrvoje Popovski wrote:
> Thank you for
On Thu, Jul 22, 2021 at 11:30:02PM +0200, Hrvoje Popovski wrote:
> On 22.7.2021. 22:52, Vitaliy Makkoveev wrote:
> > On Thu, Jul 22, 2021 at 08:38:04PM +0200, Hrvoje Popovski wrote:
> >> On 22.7.2021. 12:21, Hrvoje Popovski wrote:
> >>> Thank you for the explanation.
> >>>
> >>> after hitting the box all
On 22.7.2021. 22:52, Vitaliy Makkoveev wrote:
> On Thu, Jul 22, 2021 at 08:38:04PM +0200, Hrvoje Popovski wrote:
>> On 22.7.2021. 12:21, Hrvoje Popovski wrote:
>>> Thank you for the explanation.
>>>
>>> After hitting the box all night, the box panicked and I was able to
>>> reproduce it this morning ... I'm not
On Thu, Jul 22, 2021 at 08:38:04PM +0200, Hrvoje Popovski wrote:
> On 22.7.2021. 12:21, Hrvoje Popovski wrote:
> > Thank you for the explanation.
> >
> > After hitting the box all night, the box panicked and I was able to
> > reproduce it this morning ... I'm not sure, but the box panicked after an hour or more of
> > sen
On 22.7.2021. 12:21, Hrvoje Popovski wrote:
> Thank you for the explanation.
>
> After hitting the box all night, the box panicked and I was able to
> reproduce it this morning ... I'm not sure, but the box panicked after an hour or more of
> sending traffic through the iked tunnel ..
> I will try to reproduce it through ipse
On Thu, Jul 22, 2021 at 12:21:47PM +0200, Hrvoje Popovski wrote:
> On 22.7.2021. 0:39, Alexander Bluhm wrote:
> > On Thu, Jul 22, 2021 at 12:06:09AM +0200, Hrvoje Popovski wrote:
> >> I'm combining this and the last parallel diff and I can't see any drops in
> >> traffic. Even sending at high rate, tra
On 22.7.2021. 0:39, Alexander Bluhm wrote:
> On Thu, Jul 22, 2021 at 12:06:09AM +0200, Hrvoje Popovski wrote:
>> I'm combining this and the last parallel diff and I can't see any drops in
>> traffic. Even sending at high rate, traffic through iked or isakmpd is
>> stable at 150Kpps, which is good ..
>
Hello,
My statement here is just for the record. We should have a follow-up
discussion in a different thread, which is yet to be started (when the
time comes).
>
> > - Make ARP MP safe. Currently we need the kernel lock there or
> > it crashes. This creates latency for all kind of packets.
On Thu, Jul 22, 2021 at 12:06:09AM +0200, Hrvoje Popovski wrote:
> I'm combining this and the last parallel diff and I can't see any drops in
> traffic. Even sending at high rate, traffic through iked or isakmpd is
> stable at 150Kpps, which is good ..
Thanks, good news.
> One funny thing is that wit
On 21.7.2021. 22:21, Alexander Bluhm wrote:
> Ahh, too many diffs in my tree. I have forgotten the chunk
> crp->crp_flags = ... | CRYPTO_F_NOQUEUE
>
> Try this. Still testing it myself, it looks a bit faster.
I'm combining this and the last parallel diff and I can't see any drops in
traffic. Even se
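The one-line fix quoted above (`crp->crp_flags = ... | CRYPTO_F_NOQUEUE`) makes the crypto request run inline instead of bouncing through a crypto task queue. A userspace sketch of the dispatch logic it implies; the flag value and struct fields below are illustrative stand-ins, not the real OpenBSD definitions:

```c
#include <assert.h>

/* Illustrative stand-ins, not the real OpenBSD definitions. */
#define CRYPTO_F_NOQUEUE 0x0001

struct cryptop {
	int crp_flags;
	int crp_done;
};

static void crypto_run(struct cryptop *crp)
{
	crp->crp_done = 1;	/* stand-in for the actual cipher work */
}

/* Returns 1 if the request was handled inline, 0 if it would be queued. */
int crypto_dispatch_sketch(struct cryptop *crp)
{
	if (crp->crp_flags & CRYPTO_F_NOQUEUE) {
		/* no queue: no per-packet task wakeup/sleep latency */
		crypto_run(crp);
		return 1;
	}
	/* otherwise the request would be handed to a crypto task queue */
	return 0;
}
```

Skipping the queue is why the diff "looks a bit faster": it removes one wakeup/sleep cycle per packet on the crypto path.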
On Wed, Jul 21, 2021 at 06:41:30PM +0200, Alexander Bluhm wrote:
> On Mon, Jul 19, 2021 at 07:33:55PM +0300, Vitaliy Makkoveev wrote:
> > Hi, pipex(4) is also not ready for parallel access. In the chunk below
> > it will be accessed through (*ifp->if_input)() -> ether_input() ->
> > pipex_pppoe_inp
On Wed, Jul 21, 2021 at 07:53:55PM +0200, Hrvoje Popovski wrote:
> I've applied this and the ipsec crypto no-queue diff and I'm getting the
> splasserts below ... maybe it's something obvious; if not, I will try
> diff by diff ..
Ahh, too many diffs in my tree. I have forgotten the chunk
crp->crp_flags = .
On 21.7.2021. 18:41, Alexander Bluhm wrote:
> On Mon, Jul 19, 2021 at 07:33:55PM +0300, Vitaliy Makkoveev wrote:
>> Hi, pipex(4) is also not ready for parallel access. In the chunk below
>> it will be accessed through (*ifp->if_input)() -> ether_input() ->
>> pipex_pppoe_input(). This looks not fat
On Mon, Jul 19, 2021 at 07:33:55PM +0300, Vitaliy Makkoveev wrote:
> Hi, pipex(4) is also not ready for parallel access. In the chunk below
> it will be accessed through (*ifp->if_input)() -> ether_input() ->
> pipex_pppoe_input(). This does not look fatal, but it makes at least session
> statistics incons
On 20/07/21(Tue) 15:46, Alexander Bluhm wrote:
> On Tue, Jul 20, 2021 at 02:26:02PM +0200, Alexander Bluhm wrote:
> > > Note that having multiple threads competing for an exclusive rwlock will
> > > generate unnecessary wakeup/sleep cycles every time the lock is released.
> > > It is valuable to ke
On 20/07/21(Tue) 14:26, Alexander Bluhm wrote:
> On Tue, Jul 20, 2021 at 10:08:09AM +0200, Martin Pieuchot wrote:
> > On 19/07/21(Mon) 17:53, Alexander Bluhm wrote:
> > > Hi,
> > >
> > > I found why the IPsec workaround did not work.
> > >
> > > At init time we set ifiq->ifiq_softnet = net_tq(ifp
On Tue, Jul 20, 2021 at 02:26:02PM +0200, Alexander Bluhm wrote:
> > Note that having multiple threads competing for an exclusive rwlock will
> > generate unnecessary wakeup/sleep cycles every time the lock is released.
> > It is valuable to keep this in mind as it might add extra latency when
> >
On Tue, Jul 20, 2021 at 10:08:09AM +0200, Martin Pieuchot wrote:
> On 19/07/21(Mon) 17:53, Alexander Bluhm wrote:
> > Hi,
> >
> > I found why the IPsec workaround did not work.
> >
> > At init time we set ifiq->ifiq_softnet = net_tq(ifp->if_index +
> > idx), but the workaround modifies net_tq() a
On 19/07/21(Mon) 17:53, Alexander Bluhm wrote:
> Hi,
>
> I found why the IPsec workaround did not work.
>
> At init time we set ifiq->ifiq_softnet = net_tq(ifp->if_index +
> idx), but the workaround modifies net_tq() at runtime. Modifying
> net_tq() at runtime is bad anyway as task_add() and tas
On Mon, Jul 19, 2021 at 10:56:32PM +0200, Alexander Bluhm wrote:
> On Mon, Jul 19, 2021 at 08:02:30PM +0300, Vitaliy Makkoveev wrote:
> > I mean the case when ip_local() is called by ip_ours(). Unfortunately, I'm
> > not familiar with PPTP, but it looks affected because it doesn't use tcp or
> > udp as t
On Mon, Jul 19, 2021 at 08:02:30PM +0300, Vitaliy Makkoveev wrote:
> I mean the case when ip_local() is called by ip_ours(). Unfortunately, I'm
> not familiar with PPTP, but it looks affected because it doesn't use tcp or
> udp as transport but encapsulates them into ip frames. Sorry for the noise
> if I'm w
On Mon, Jul 19, 2021 at 06:40:07PM +0200, Alexander Bluhm wrote:
> On Fri, Jul 09, 2021 at 10:47:49PM +0300, Vitaliy Makkoveev wrote:
> > If I understood your diff right, pipex(4) is also affected through:
> >
> > ip_local()
> >-> ip_deliver()
> > -> (*pr_input)()
> >-> gre_input()
On 19.7.2021. 17:53, Alexander Bluhm wrote:
> Hi,
>
> I found why the IPsec workaround did not work.
>
> At init time we set ifiq->ifiq_softnet = net_tq(ifp->if_index +
> idx), but the workaround modifies net_tq() at runtime. Modifying
> net_tq() at runtime is bad anyway as task_add() and task_d
On Fri, Jul 09, 2021 at 10:47:49PM +0300, Vitaliy Makkoveev wrote:
> If I understood your diff right, pipex(4) is also affected through:
>
> ip_local()
>-> ip_deliver()
> -> (*pr_input)()
>-> gre_input()
> -> gre_input_key()
>-> gre_input_1()
> ->
On Mon, Jul 19, 2021 at 05:53:40PM +0200, Alexander Bluhm wrote:
> Hi,
>
> I found why the IPsec workaround did not work.
>
> At init time we set ifiq->ifiq_softnet = net_tq(ifp->if_index +
> idx), but the workaround modifies net_tq() at runtime. Modifying
> net_tq() at runtime is bad anyway as
Hi,
I found why the IPsec workaround did not work.
At init time we set ifiq->ifiq_softnet = net_tq(ifp->if_index +
idx), but the workaround modifies net_tq() at runtime. Modifying
net_tq() at runtime is bad anyway as task_add() and task_del() could
be called with different task queues.
So bette
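The invariant described above can be sketched in userspace C: the queue an ifiq binds to must be computed once, from stable inputs, so a later task_add() and task_del() always resolve to the same queue. The names, the NETTQ count, and the modulo mapping below are illustrative, not the actual kernel code:

```c
#include <assert.h>

#define NETTQ 4	/* number of softnet task queues (illustrative) */

/* Stable mapping: depends only on the argument and a compile-time
 * constant, so it never changes at runtime and every caller gets the
 * same queue for the same index. */
unsigned int net_tq(unsigned int ifindex)
{
	return ifindex % NETTQ;
}

struct ifiq {
	unsigned int ifiq_softnet;	/* queue chosen once, at init time */
};

void ifiq_init(struct ifiq *q, unsigned int ifindex, unsigned int idx)
{
	q->ifiq_softnet = net_tq(ifindex + idx);
}
```

If net_tq() were instead re-evaluated with a different mapping at runtime, a task_del() could target a different queue than the matching task_add(), which is the bug described above.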
On Fri, Jul 09, 2021 at 02:58:50PM +0200, Alexander Bluhm wrote:
> 1. With non parallel forwarding the IPsec traffic stalls after a while.
> esp_input_cb: authentication failed for packet in SA 10.3.45.35/83089fff
Together with tobhe@ we found the issue. The authentication before
decryption uses
Hi,
If I understood your diff right, pipex(4) is also affected through:
ip_local()
-> ip_deliver()
-> (*pr_input)()
-> gre_input()
-> gre_input_key()
-> gre_input_1()
-> pipex_pptp_input()
> On 7 Jul 2021, at 00:05, Alexander Bluhm wrote:
>
> Hi,
I think we see two problems here:
1. With non parallel forwarding the IPsec traffic stalls after a while.
Compiled with ENCDEBUG I get this message for each received ESP packet:
esp_input_cb: authentication failed for packet in SA 10.3.45.35/83089fff
I can reproduce it more or less after 30 sec
On Thu, Jul 08, 2021 at 08:08:23AM +0200, Hrvoje Popovski wrote:
> On 8.7.2021. 0:10, Vitaliy Makkoveev wrote:
> > On Wed, Jul 07, 2021 at 11:07:08PM +0200, Hrvoje Popovski wrote:
> >> On 7.7.2021. 22:36, Vitaliy Makkoveev wrote:
> >>> Thanks. ipsp_spd_lookup() stopped panic in pool_get(9).
> >>>
>
On 8.7.2021. 0:10, Vitaliy Makkoveev wrote:
> On Wed, Jul 07, 2021 at 11:07:08PM +0200, Hrvoje Popovski wrote:
>> On 7.7.2021. 22:36, Vitaliy Makkoveev wrote:
>>> Thanks. ipsp_spd_lookup() stopped panic in pool_get(9).
>>>
>>> I guess the panics continue because simultaneous modifications of
>>> 't
On Wed, Jul 07, 2021 at 11:07:08PM +0200, Hrvoje Popovski wrote:
> On 7.7.2021. 22:36, Vitaliy Makkoveev wrote:
> > Thanks. ipsp_spd_lookup() stopped panic in pool_get(9).
> >
> > I guess the panics continue because simultaneous modifications of
> > 'tdbp->tdb_policy_head' break it. Could you try
On 7.7.2021. 22:36, Vitaliy Makkoveev wrote:
> Thanks. ipsp_spd_lookup() stopped panic in pool_get(9).
>
> I guess the panics continue because simultaneous modifications of
> 'tdbp->tdb_policy_head' break it. Could you try the diff below? It
> introduces `tdb_polhd_mtx' mutex(9) and uses it to pro
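The shape of that fix, serializing every modification of a shared list head behind a mutex(9), can be shown with a userspace pthread analogue (the struct and names below are illustrative, not the kernel's tdb code):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>
#include <stdlib.h>

/* Userspace analogue of protecting a tdb_policy_head-style list with
 * a mutex: every insert and every walk takes the same lock, so
 * simultaneous modifications can no longer corrupt the list. */
struct policy {
	struct policy *p_next;
};

static struct policy *policy_head;
static pthread_mutex_t policy_mtx = PTHREAD_MUTEX_INITIALIZER;

void policy_insert(struct policy *p)
{
	pthread_mutex_lock(&policy_mtx);
	p->p_next = policy_head;
	policy_head = p;
	pthread_mutex_unlock(&policy_mtx);
}

size_t policy_count(void)
{
	size_t n = 0;

	pthread_mutex_lock(&policy_mtx);
	for (struct policy *p = policy_head; p != NULL; p = p->p_next)
		n++;
	pthread_mutex_unlock(&policy_mtx);
	return n;
}

void *inserter(void *arg)
{
	(void)arg;
	for (int i = 0; i < 1000; i++)
		policy_insert(calloc(1, sizeof(struct policy)));
	return NULL;
}
```

Without the lock, two CPUs racing on the head pointer can lose insertions or leave a dangling p_next, which is exactly the corruption suspected in the panics above.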
On Wed, Jul 07, 2021 at 10:01:59PM +0200, Hrvoje Popovski wrote:
> On 7.7.2021. 19:38, Vitaliy Makkoveev wrote:
> > Hi,
> >
> > It seems the first panic occurred because ipsp_spd_lookup()
> > modifies tdbp->tdb_policy_head and simultaneous execution breaks it.
> > I guess at least mutex(9
On Tue, Jul 06, 2021 at 11:05:47PM +0200, Alexander Bluhm wrote:
> Hi,
>
> Thanks a lot to Hrvoje Popovski for testing my diff and to sashan@
> and dlg@ for fixing all the fallout in pf and pseudo drivers.
>
> Are there any bugs left? I think everything has been fixed.
>
I've just committed
On Wed, Jul 07, 2021 at 01:15:00PM -0700, Chris Cappuccio wrote:
> Alexandr Nedvedicky [alexandr.nedvedi...@oracle.com] wrote:
> > diff --git a/sys/net/if_tpmr.c b/sys/net/if_tpmr.c
> > index f6eb99f347c..4ffa5b18293 100644
> > @@ -725,10 +759,9 @@ tpmr_p_dtor(struct tpmr_softc *sc, struct tpmr_por
Alexandr Nedvedicky [alexandr.nedvedi...@oracle.com] wrote:
> diff --git a/sys/net/if_tpmr.c b/sys/net/if_tpmr.c
> index f6eb99f347c..4ffa5b18293 100644
> @@ -725,10 +759,9 @@ tpmr_p_dtor(struct tpmr_softc *sc, struct tpmr_port *p,
> const char *op)
> if_detachhook_del(ifp0, &p->p_dtask);
>
On 7.7.2021. 19:38, Vitaliy Makkoveev wrote:
> Hi,
>
> It seems the first panic occurred because ipsp_spd_lookup()
> modifies tdbp->tdb_policy_head and simultaneous execution breaks it.
> I guess at least mutex(9) should be used to protect `tdb_policy_head'.
>
> The second panic occurred
On Wed, Jul 07, 2021 at 08:38:23PM +0300, Vitaliy Makkoveev wrote:
> The second panic occurred because ipsp_acquire_sa() does
> `ipsec_acquire_pool' initialization in runtime so parallel execution
> breaks it. It's easy to fix.
>
> Could you try the diff below? It moves `ipsec_acquire_pool'
> initi
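The pattern of that second fix can be sketched in userspace with pthread_once(3), which stands in for moving the pool_init() out of the hot path so it can never race (names are illustrative):

```c
#include <assert.h>
#include <pthread.h>

/* A pool that used to be lazily initialized at runtime is racy when
 * two CPUs reach the path at once.  Initializing it exactly once,
 * before first use, removes the race; pthread_once() plays the role
 * of doing the pool_init() at attach/boot time. */
static pthread_once_t acquire_pool_once = PTHREAD_ONCE_INIT;
static int acquire_pool_ready;

static void acquire_pool_init(void)
{
	/* kernel analogue: pool_init(&ipsec_acquire_pool, ...) */
	acquire_pool_ready = 1;
}

int ipsp_acquire_sa_sketch(void)
{
	pthread_once(&acquire_pool_once, acquire_pool_init);
	return acquire_pool_ready;	/* the pool is guaranteed usable here */
}
```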
Hello,
On Wed, Jul 07, 2021 at 06:14:35PM +0200, Alexander Bluhm wrote:
> On Wed, Jul 07, 2021 at 10:20:01AM +0200, Alexandr Nedvedicky wrote:
> > we still need to agree on whether pf_test() can sleep (give up the CPU)
> > when processing a packet. I don't mind if pf_test() gives up the CPU (sleeps
On Wed, Jul 07, 2021 at 02:09:23PM +0200, Alexander Bluhm wrote:
> On Wed, Jul 07, 2021 at 12:52:26PM +0200, Hrvoje Popovski wrote:
> > On 7.7.2021. 12:46, Hrvoje Popovski wrote:
> > > Panic can be triggered when i have parallel diff and sending traffic
> > Different panic on same setup ...
>
> Th
On Wed, Jul 07, 2021 at 10:20:01AM +0200, Alexandr Nedvedicky wrote:
> we still need to agree on whether pf_test() can sleep (give up the CPU)
> when processing a packet. I don't mind if pf_test() gives up the CPU (sleeps)
> when waiting for other packets to finish their business in pf(4).
I t
On Wed, Jul 07, 2021 at 12:52:26PM +0200, Hrvoje Popovski wrote:
> On 7.7.2021. 12:46, Hrvoje Popovski wrote:
> > The panic can be triggered when I have the parallel diff and am sending traffic
> Different panic on same setup ...
Thanks a lot for the report.
I can see the same stop of traffic and crashes he
On 7.7.2021. 12:46, Hrvoje Popovski wrote:
> The panic can be triggered when I have the parallel diff and am sending
> traffic over an ipsec tunnel; on the other side, while traffic is flowing, I
> restart the isakmpd daemon, and while ipsec is negotiating I do ifconfig ix1
> down && ifconfig ix1 up ... sometimes it
Hi,
I don't want to pollute bluhm@'s parallel forwarding mail on tech@, so I'm
sending this report as a separate thread. This panic is dependent on
bluhm@'s parallel diff ... and I found it yesterday.
I have an ipsec tunnel between two hosts without pf, and I'm sending
traffic over that tunnel .. I'm
Hello,
On Tue, Jul 06, 2021 at 11:05:47PM +0200, Alexander Bluhm wrote:
> Hi,
>
> Thanks a lot to Hrvoje Popovski for testing my diff and to sashan@
> and dlg@ for fixing all the fallout in pf and pseudo drivers.
>
> Are there any bugs left? I think everything has been fixed.
>
there is on
Hi,
Thanks a lot to Hrvoje Popovski for testing my diff and to sashan@
and dlg@ for fixing all the fallout in pf and pseudo drivers.
Are there any bugs left? I think everything has been fixed.
To make progress I think it should be committed. Then we get MP
pressure on the network stack in real l
On 16/05/21(Sun) 15:56, Vitaliy Makkoveev wrote:
>
>
> > On 14 May 2021, at 14:43, Martin Pieuchot wrote:
> >
> > On 13/05/21(Thu) 14:50, Vitaliy Makkoveev wrote:
> >> On Thu, May 13, 2021 at 01:15:05PM +0200, Hrvoje Popovski wrote:
> >>> On 13.5.2021. 1:25, Vitaliy Makkoveev wrote:
> It s
> On 14 May 2021, at 14:43, Martin Pieuchot wrote:
>
> On 13/05/21(Thu) 14:50, Vitaliy Makkoveev wrote:
>> On Thu, May 13, 2021 at 01:15:05PM +0200, Hrvoje Popovski wrote:
>>> On 13.5.2021. 1:25, Vitaliy Makkoveev wrote:
It seems this lock order issue is not parallel diff specific.
>>>
>
On 13/05/21(Thu) 14:50, Vitaliy Makkoveev wrote:
> On Thu, May 13, 2021 at 01:15:05PM +0200, Hrvoje Popovski wrote:
> > On 13.5.2021. 1:25, Vitaliy Makkoveev wrote:
> > > It seems this lock order issue is not parallel diff specific.
> >
> >
> >
> > Yes, you are right ... it seemed familiar but
On Thu, May 13, 2021 at 01:15:05PM +0200, Hrvoje Popovski wrote:
> On 13.5.2021. 1:25, Vitaliy Makkoveev wrote:
> > It seems this lock order issue is not parallel diff specific.
>
>
>
> Yes, you are right ... it seemed familiar but I couldn't reproduce it
> on lacp trunk or without this diff so
On 13.5.2021. 1:25, Vitaliy Makkoveev wrote:
> It seems this lock order issue is not parallel diff specific.
Yes, you are right ... it seemed familiar, but I couldn't reproduce it
on lacp trunk or without this diff, so I thought the parallel diff was
the one to blame ..
Sorry for the noise ..
It seems this lock order issue is not parallel diff specific.
> On 12 May 2021, at 12:58, Hrvoje Popovski wrote:
>
> On 21.4.2021. 21:36, Alexander Bluhm wrote:
>> We need more MP pressure to find such bugs and races. I think now
>> is a good time to give this diff broader testing and commit i
On 21.4.2021. 21:36, Alexander Bluhm wrote:
> We need more MP pressure to find such bugs and races. I think now
> is a good time to give this diff broader testing and commit it.
> You need interfaces with multiple queues to see a difference.
Hi,
while forwarding ip4 traffic over a box with parall
On 22.4.2021. 13:08, Hrvoje Popovski wrote:
> On 22.4.2021. 12:38, Alexander Bluhm wrote:
>> It is not clear why the lock helps. Is it a bug in routing or ARP?
>> Or is it just different timing introduced by the additional kernel
>> lock? The parallel network task stress the subsystems of the ker
On Thu, Apr 22, 2021 at 08:34:35PM +0200, Hrvoje Popovski wrote:
> r620-1# panic: pool_cache_item_magic_check: mbufpl cpu free list ...
> uvm_fault(...)
> [several CPUs panicked and printed at the same time, interleaving the
> console output character by character]
Hello,
On Thu, Apr 22, 2021 at 03:08:21PM +0200, Mark Kettenis wrote:
> > Date: Thu, 22 Apr 2021 14:43:24 +0200
> > From: Alexandr Nedvedicky
> >
> > Hello,
> >
> > On Thu, Apr 22, 2021 at 01:09:34PM +0200, Alexander Bluhm wrote:
> > > On Thu, Apr 22, 2021 at 12:33:13PM +0200, Hrvoje Popovski w
On 22/04/21(Thu) 15:08, Mark Kettenis wrote:
> > Date: Thu, 22 Apr 2021 14:43:24 +0200
> > From: Alexandr Nedvedicky
> >
> > Hello,
> >
> > On Thu, Apr 22, 2021 at 01:09:34PM +0200, Alexander Bluhm wrote:
> > > On Thu, Apr 22, 2021 at 12:33:13PM +0200, Hrvoje Popovski wrote:
> > > > r620-1# papn
> Date: Thu, 22 Apr 2021 14:43:24 +0200
> From: Alexandr Nedvedicky
>
> Hello,
>
> On Thu, Apr 22, 2021 at 01:09:34PM +0200, Alexander Bluhm wrote:
> > On Thu, Apr 22, 2021 at 12:33:13PM +0200, Hrvoje Popovski wrote:
> > > r620-1# panic: pool_cache_item_
> > > [two CPUs printed the same panic simultaneously, interleaving the output]
Hello,
On Thu, Apr 22, 2021 at 01:09:34PM +0200, Alexander Bluhm wrote:
> On Thu, Apr 22, 2021 at 12:33:13PM +0200, Hrvoje Popovski wrote:
> > r620-1# panic: pool_cache_item_magic_check: mbufpl cpu
> > [two CPUs printed the same panic simultaneously, interleaving the output]
On Thu, Apr 22, 2021 at 12:26:50AM +0200, Alexander Bluhm wrote:
> As a wild guess, you could apply this diff on top. Something similar
> has fixed IPv6 NDP problem I have seen. Maybe it is in the routing
> table, that is used for ARP and NDP.
Here are the performance numbers for forwarding with
On 22.4.2021. 13:42, Mark Kettenis wrote:
>> Date: Thu, 22 Apr 2021 13:09:34 +0200
>> From: Alexander Bluhm
>>
>> On Thu, Apr 22, 2021 at 12:33:13PM +0200, Hrvoje Popovski wrote:
>>> r620-1# panic: pool_cache_item_magic_check:
>>> [two CPUs printed the same panic simultaneously, interleaving the output]
> Date: Thu, 22 Apr 2021 13:09:34 +0200
> From: Alexander Bluhm
>
> On Thu, Apr 22, 2021 at 12:33:13PM +0200, Hrvoje Popovski wrote:
> > r620-1# panic: pool_cache_item_magic_check: mbufpl cpu fr
> > [two CPUs printed the same panic simultaneously, interleaving the output]
On Thu, Apr 22, 2021 at 12:33:13PM +0200, Hrvoje Popovski wrote:
> r620-1# panic: pool_cache_item_magic_check: mbufpl cpu free list modified:
> item
> [two CPUs printed the same panic simultaneously, interleaving the console
> output character by character]
>
On 22.4.2021. 12:38, Alexander Bluhm wrote:
> It is not clear why the lock helps. Is it a bug in routing or ARP?
> Or is it just different timing introduced by the additional kernel
> lock? The parallel network task stress the subsystems of the kernel
> more than before with MP load. Having more
On Thu, Apr 22, 2021 at 11:36:07AM +0200, Hrvoje Popovski wrote:
> On 22.4.2021. 11:02, Alexander Bluhm wrote:
> > This was without my kernel lock around ARP bandage, right?
>
> yes, yes ...
Good. Just wanted to be sure.
> > Did you enter boot reboot before doing mach ddbcpu 0xa?
>
> nope... i
On 22.4.2021. 11:36, Hrvoje Popovski wrote:
> If you want, I'll try to reproduce it on other boxes..
> Maybe I can trigger it here easily because of the 2 sockets?
on second box with 6 x Intel(R) Xeon(R) CPU E5-2643 v2 @ 3.50GHz,
3600.02 MHz..
r620-1# panic: pool_cache
[two CPUs printed the same panic simultaneously, interleaving the output]
On 22.4.2021. 11:02, Alexander Bluhm wrote:
> On Thu, Apr 22, 2021 at 09:03:22AM +0200, Hrvoje Popovski wrote:
>> something like this:
>>
>> x3550m4# panic: pool_do_get: mcl2k2: page empty
>> [interleaved with a second simultaneous panic mentioning an mbuf pool;
>> two CPUs printed at once]
>
> This was without my
On Thu, Apr 22, 2021 at 09:03:22AM +0200, Hrvoje Popovski wrote:
> something like this:
>
> x3550m4# panic: pool_do_get: mcl2k2: page empty
> [interleaved with a second simultaneous panic mentioning an mbuf pool;
> two CPUs printed at once]
This was without my kernel lock around ARP bandage, right?
> ddb{9}> mac
On 22.4.2021. 1:10, Hrvoje Popovski wrote:
> On 22.4.2021. 0:31, Alexandr Nedvedicky wrote:
>> Hello,
>>
>>
Hi,
with this diff I'm getting a panic when I'm pushing traffic over that box.
This is plain forwarding. Should I compile it with witness?
>>>
>>>
>>> with witness
>>>
>>
>> an
On 22.4.2021. 0:31, Alexandr Nedvedicky wrote:
> Hello,
>
>
>>> Hi,
>>>
>>> with this diff I'm getting a panic when I'm pushing traffic over that box.
>>> This is plain forwarding. Should I compile it with witness?
>>
>>
>> with witness
>>
>
> any chance to check other CPUs to see what code they are e
On 22.4.2021. 0:26, Alexander Bluhm wrote:
> On Wed, Apr 21, 2021 at 11:28:17PM +0200, Hrvoje Popovski wrote:
>> with this diff I'm getting a panic when I'm pushing traffic over that box.
>
> Thanks for testing.
>
>> I'm sending traffic from host connected on ix0 from address 10.10.0.1 to
>> host c
On Wed, Apr 21, 2021 at 10:50:40PM +0200, Alexander Bluhm wrote:
> > 1108 pfkeyv2_send(struct socket *so, void *message, int len)
> > 1109 {
> >
> > 2013 ipsec_in_use++;
> > 2014 /*
> > 2015 * XXXSMP IPsec data structures are not
Hello,
> > Hi,
> >
> > with this diff I'm getting a panic when I'm pushing traffic over that box.
> > This is plain forwarding. Should I compile it with witness?
>
>
> with witness
>
Any chance to check the other CPUs to see what code they are executing?
I hope to be lucky enough to see the thread, w
On Wed, Apr 21, 2021 at 11:28:17PM +0200, Hrvoje Popovski wrote:
> with this diff I'm getting a panic when I'm pushing traffic over that box.
Thanks for testing.
> I'm sending traffic from host connected on ix0 from address 10.10.0.1 to
> host connected to ix1 to addresses 10.11.0.1 - 10.11.255.255
On 21.4.2021. 23:28, Hrvoje Popovski wrote:
> On 21.4.2021. 21:36, Alexander Bluhm wrote:
>> Hi,
>>
>> For a while we are running network without kernel lock, but with a
>> network lock. The latter is an exclusive sleeping rwlock.
>>
>> It is possible to run the forwarding path in parallel on mult
On 21.4.2021. 21:36, Alexander Bluhm wrote:
> Hi,
>
> For a while we are running network without kernel lock, but with a
> network lock. The latter is an exclusive sleeping rwlock.
>
> It is possible to run the forwarding path in parallel on multiple
> cores. I use ix(4) interfaces which provid
On Wed, Apr 21, 2021 at 11:27:15PM +0300, Vitaliy Makkoveev wrote:
> Did you tested your diff with ipsec(4) enabled?
I enable it for the IPsec tests, but disable it for the others.
Doing IPsec policy checks would also slow down non-IPsec network
traffic if there is any flow in the kernel.
> I'm a
Hello,
Thanks for the good news.
On Wed, Apr 21, 2021 at 10:32:08PM +0200, Alexander Bluhm wrote:
> On Wed, Apr 21, 2021 at 09:59:53PM +0200, Alexandr Nedvedicky wrote:
> > was pf(4) enabled while running those tests?
>
> Yes.
>
> > if pf(4) was enabled while those tests were running,
> >
On Wed, Apr 21, 2021 at 09:59:53PM +0200, Alexandr Nedvedicky wrote:
> was pf(4) enabled while running those tests?
Yes.
> if pf(4) was enabled while those tests were running,
> what rules were loaded into pf(4)?
Default pf.conf:
# $OpenBSD: pf.conf,v 1.55 2017/12/03 20:40:04
On Wed, Apr 21, 2021 at 09:36:11PM +0200, Alexander Bluhm wrote:
> Hi,
>
> For a while we are running network without kernel lock, but with a
> network lock. The latter is an exclusive sleeping rwlock.
>
> It is possible to run the forwarding path in parallel on multiple
> cores. I use ix(4) in
Hello,
just a quick question:
was pf(4) enabled while running those tests?
If pf(4) was enabled while those tests were running,
what rules were loaded into pf(4)?
My guess is pf(4) was not enabled when running those tests. If I remember
correctly, I could see a performance boost by fa
Hi,
For a while we are running network without kernel lock, but with a
network lock. The latter is an exclusive sleeping rwlock.
It is possible to run the forwarding path in parallel on multiple
cores. I use ix(4) interfaces which provide one input queue for
each CPU. For that we have to start
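The per-queue design described above relies on a stable flow-to-queue hash, so one flow always lands in the same input queue and is handled by the same thread, preserving per-flow packet order. A toy sketch; the hash function and struct are made up for illustration (real ix(4) hardware uses RSS):

```c
#include <assert.h>
#include <stdint.h>

/* Toy sketch of multi-queue receive dispatch: hash each flow to one
 * of NQUEUES input queues, so a given flow is always handled by the
 * same softnet thread.  The hash is illustrative, not the NIC's RSS. */
#define NQUEUES 4

struct flow {
	uint32_t src, dst;
	uint16_t sport, dport;
};

unsigned int flow_to_queue(const struct flow *f)
{
	uint32_t h;

	h = f->src ^ f->dst ^ ((uint32_t)f->sport << 16 | f->dport);
	h ^= h >> 16;		/* mix the high bits down */
	return h % NQUEUES;	/* stable: same flow -> same queue */
}
```

Because the mapping depends only on the flow's addresses and ports, packets of one connection never migrate between queues, which is what makes parallel processing safe for ordering.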