On Sat, Oct 02, 2021 at 03:23:57PM -0700, Chris Cappuccio wrote:
> Hrvoje Popovski [hrv...@srce.hr] wrote:
> >
> > box didn't panic, just stopped forwarding traffic through tunnel.
>
Any chance any progress has been made here? Are there any newer
versions of these diffs floating around?
On 23.7.2021. 16:20, Vitaliy Makkoveev wrote:
> On Thu, Jul 22, 2021 at 11:30:02PM +0200, Hrvoje Popovski wrote:
> > On 22.7.2021. 22:52, Vitaliy Makkoveev wrote:
> > > On Thu, Jul 22, 2021 at 08:38:04PM +0200, Hrvoje Popovski wrote:
> > > > [...]

On 22.7.2021. 12:21, Hrvoje Popovski wrote:
> Thank you for explanation..
>
> after hitting box all night, box panic and i was able to reproduce it
> this morning ... I'm not sure but box panic after hour or more of
> sending traffic through iked tunnel ..
> I will try to reproduce it through ipse
On 22.7.2021. 0:39, Alexander Bluhm wrote:
> On Thu, Jul 22, 2021 at 12:06:09AM +0200, Hrvoje Popovski wrote:
>> I'm combining this and last parallel diff and i can't see any drops in
>> traffic. Even sending at high rate, traffic through iked or isakmpd is
>> stable at 150Kpps, which is good ..
>
Hello,

my statement here is just for the record. We should have a follow-up
discussion in a different thread, which is yet to be started (when the
time comes).

> > - Make ARP MP safe. Currently we need the kernel lock there or
> >   it crashes. This creates latency for all kinds of packets.
On Thu, Jul 22, 2021 at 12:06:09AM +0200, Hrvoje Popovski wrote:
> I'm combining this and last parallel diff and i can't see any drops in
> traffic. [...]

Thanks, good news.

> One funny thing is that wit
On 21.7.2021. 22:21, Alexander Bluhm wrote:
> Ahh, too many diffs in my tree. I have forgotten the chunk
> crp->crp_flags = ... | CRYPTO_F_NOQUEUE
>
> Try this. Still testing it myself, it looks a bit faster.
On Wed, Jul 21, 2021 at 06:41:30PM +0200, Alexander Bluhm wrote:
> On Mon, Jul 19, 2021 at 07:33:55PM +0300, Vitaliy Makkoveev wrote:
> > Hi, pipex(4) is also not ready for parallel access. [...]
On Wed, Jul 21, 2021 at 07:53:55PM +0200, Hrvoje Popovski wrote:
> i've applied this and ipsec crypto no queue diff and i'm getting
> splasserts below ... maybe it's something obvious, if not, i will try
> diff by diff ..
On Mon, Jul 19, 2021 at 07:33:55PM +0300, Vitaliy Makkoveev wrote:
> Hi, pipex(4) is also not ready for parallel access. In the chunk below
> it will be accessed through (*ifp->if_input)() -> ether_input() ->
> pipex_pppoe_input(). This looks not fatal but makes at least session
> statistics incons
On 20/07/21(Tue) 15:46, Alexander Bluhm wrote:
> On Tue, Jul 20, 2021 at 02:26:02PM +0200, Alexander Bluhm wrote:
> > > Note that having multiple threads competing for an exclusive rwlock will
> > > generate unnecessary wakeup/sleep cycles every time the lock is released.
> > > [...]

On 20/07/21(Tue) 14:26, Alexander Bluhm wrote:
> On Tue, Jul 20, 2021 at 10:08:09AM +0200, Martin Pieuchot wrote:
> > On 19/07/21(Mon) 17:53, Alexander Bluhm wrote:
> > > [...]
On Tue, Jul 20, 2021 at 03:41:32PM +0200, Alexander Bluhm wrote:
> Hi,
>
> The current workaround to disable parallel IPsec does not work. [...]
On Tue, Jul 20, 2021 at 02:26:02PM +0200, Alexander Bluhm wrote:
> > Note that having multiple threads competing for an exclusive rwlock will
> > generate unnecessary wakeup/sleep cycles every time the lock is released.
> > It is valuable to keep this in mind as it might add extra latency when
> > [...]
Hi,
The current workaround to disable parallel IPsec does not work.
Variable nettaskqs must not change at runtime. Interface input
queues choose the thread during init with ifiq_softnet = net_tq().
So it cannot be modified after pfkeyv2_send() sets the first SA in
kernel. Also changing the calcu
On Tue, Jul 20, 2021 at 10:08:09AM +0200, Martin Pieuchot wrote:
> On 19/07/21(Mon) 17:53, Alexander Bluhm wrote:
> > [...]

On Mon, Jul 19, 2021 at 05:53:40PM +0200, Alexander Bluhm wrote:

Hi,

I found why the IPsec workaround did not work.

At init time we set ifiq->ifiq_softnet = net_tq(ifp->if_index +
idx), but the workaround modifies net_tq() at runtime. Modifying
net_tq() at runtime is bad anyway as task_add() and task_del() could
be called with different task queues.

So bette