On Wednesday 29 August 2007 10:15, James Chapman wrote:
> Jan-Bernd Themann wrote:
> > What I'm trying to improve with this approach is interrupt
> > mitigation for NICs where the hardware support for interrupt
> > mitigation is limited. I'm not trying to improve this for NICs
> > that work well with hardware interrupt mitigation.
From: Jan-Bernd Themann <[EMAIL PROTECTED]>
Date: Wed, 29 Aug 2007 09:10:15 +0200
> In the end I want to reduce the CPU utilization. And one way
> to do that is LRO, which also only works well if there are more
> than just a very few packets to aggregate. So at least our
> driver (eHEA) would benefit.
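A minimal sketch of where such aggregation sits in a driver's receive path. The old inet_lro interface is gone, so this uses the GRO entry point napi_gro_receive() as the modern equivalent; drv_next_rx_skb() is an assumed helper, not eHEA code:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* assumed helper: pull the next completed RX buffer, or NULL */
struct sk_buff *drv_next_rx_skb(struct napi_struct *napi);

static int drv_poll_rx(struct napi_struct *napi, int budget)
{
        int done = 0;
        struct sk_buff *skb;

        /* GRO merges consecutive packets of the same flow, so it only
         * pays off when several packets arrive per poll cycle; with
         * one or two packets there is nothing to aggregate. */
        while (done < budget && (skb = drv_next_rx_skb(napi)) != NULL) {
                napi_gro_receive(napi, skb);
                done++;
        }
        return done;
}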
Hi David
David Miller wrote:
> Interrupt mitigation only works if it helps you avoid interrupts.
> This scheme potentially makes more of them happen.
> The hrtimer is just another interrupt, a cpu locally triggered one,
> but it has much of the same costs nonetheless.
> So if you set this timer, it triggers…
From: Jan-Bernd Themann <[EMAIL PROTECTED]>
Date: Tue, 28 Aug 2007 13:21:09 +0200
> So I guess one solution is to "force" an HW interrupt when too many
> RQs are processed on the same CPU (when no IRQ pinning is
> used). This is something the driver has to handle.
No, the solution is to lock the…
From: Jan-Bernd Themann <[EMAIL PROTECTED]>
Date: Tue, 28 Aug 2007 13:21:09 +0200
> Problem for multi queue driver with interrupt distribution scheme set to
> round robin for this simple example:
> Assuming we have 2 SLOW CPUs. CPU_1 is under heavy load (applications).
> CPU_2 is not under heavy load…
From: Jan-Bernd Themann <[EMAIL PROTECTED]>
Date: Tue, 28 Aug 2007 13:19:03 +0200
> I will try the following scheme (once we get hrtimers): Each device
> (queue) has a hrtimer. Schedule the timer in the poll function
> instead of reactivating IRQs when a high load situation has been
> detected…
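A minimal sketch of that scheme, assuming the later napi_* API and standard hrtimer calls; the drv_* helpers, the per-queue struct, the 100us delay, and the load heuristic are illustrative assumptions, not the actual eHEA patch:

#include <linux/hrtimer.h>
#include <linux/ktime.h>
#include <linux/netdevice.h>

#define DRV_POLL_DELAY_NS (100 * 1000)  /* assumed 100us re-poll delay */

struct drv_queue {                      /* hypothetical per-queue state */
        struct napi_struct napi;
        struct hrtimer poll_timer;
};

int drv_process_rx(struct drv_queue *q, int budget);    /* assumed */
bool drv_load_is_high(struct drv_queue *q);             /* assumed */
void drv_enable_irq(struct drv_queue *q);               /* assumed */

/* Timer handler: re-enter ->poll() through the normal NAPI path,
 * without ever re-enabling the device interrupt. */
static enum hrtimer_restart drv_poll_timer(struct hrtimer *timer)
{
        struct drv_queue *q = container_of(timer, struct drv_queue,
                                           poll_timer);

        napi_schedule(&q->napi);
        return HRTIMER_NORESTART;
}

/* at init time (assumed):
 *   hrtimer_init(&q->poll_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
 *   q->poll_timer.function = drv_poll_timer;
 */
static int drv_poll(struct napi_struct *napi, int budget)
{
        struct drv_queue *q = container_of(napi, struct drv_queue, napi);
        int done = drv_process_rx(q, budget);

        if (done == budget)
                return budget;          /* more work: core keeps polling */

        napi_complete(napi);
        if (drv_load_is_high(q))        /* high load: poll again from the
                                         * timer instead of re-arming IRQs */
                hrtimer_start(&q->poll_timer,
                              ktime_set(0, DRV_POLL_DELAY_NS),
                              HRTIMER_MODE_REL);
        else
                drv_enable_irq(q);
        return done;
}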
On Tue, Aug 28, 2007 at 01:48:20PM +0200, Jan-Bernd Themann ([EMAIL PROTECTED])
wrote:
> I'm not sure if I understand your approach correctly.
> This approach may reduce the number of interrupts, but it does so
> by blocking the CPU for up to 1 jiffy (that can be quite some time
> on some platforms)…
On Tuesday 28 August 2007 11:22, James Chapman wrote:
> > So in this scheme what runs ->poll() to process incoming packets?
> > The hrtimer?
>
> No, the regular NAPI networking core calls ->poll() as usual; no timers
> are involved. This scheme simply delays the napi_complete() from the
> driver…
Hi
On Monday 27 August 2007 23:02, David Miller wrote:
> But there are huger fish to fry for you I think. Talk to your
> platform maintainers and ask for an interface for obtaining
> a flat static distribution of interrupts to cpus in order to
> support multiqueue NAPI better.
>
> In your previous…
From: James Chapman <[EMAIL PROTECTED]>
Date: Mon, 27 Aug 2007 22:41:43 +0100
> I don't recall saying anything in previous posts about this. Are you
> confusing my posts with Jan-Bernd's?
Yes, my bad.
> Jan-Bernd has been talking about using hrtimers to _reschedule_
> NAPI. My posts are suggesting…
From: James Chapman <[EMAIL PROTECTED]>
Date: Mon, 27 Aug 2007 16:51:29 +0100
> To implement this, there's no need for timers, hrtimers or generic NAPI
> support that others have suggested. A driver's poll() would set an
> internal flag and record the current jiffies value when finding
> work…
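A sketch of the scheme James describes, assuming the later napi_* API; NAPI requires a ->poll() that does not call napi_complete() to claim the full budget so the core keeps polling it. The last_work field and the drv_* helpers are assumptions:

#include <linux/jiffies.h>
#include <linux/netdevice.h>

struct drv_queue {                      /* hypothetical per-queue state */
        struct napi_struct napi;
        unsigned long last_work;        /* jiffies when work was last found */
};

int drv_process_rx(struct drv_queue *q, int budget);    /* assumed */
void drv_enable_irq(struct drv_queue *q);               /* assumed */

static int drv_poll(struct napi_struct *napi, int budget)
{
        struct drv_queue *q = container_of(napi, struct drv_queue, napi);
        int done = drv_process_rx(q, budget);

        if (done)
                q->last_work = jiffies;

        /* Stay in polling mode (claim the full budget, skip
         * napi_complete()) until we have been idle for over a jiffy.
         * A packet arriving in that window is picked up by the next
         * poll with no interrupt at all. */
        if (done == budget || !time_after(jiffies, q->last_work + 1))
                return budget;

        napi_complete(napi);            /* idle long enough: re-arm the IRQ */
        drv_enable_irq(q);
        return done;
}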
From: Jan-Bernd Themann <[EMAIL PROTECTED]>
Date: Mon, 27 Aug 2007 11:47:01 +0200
> So the question is simply: Do we want drivers that need (benefit
> from) a timer based polling support to implement their own timers
> each, or should there be a generic support?
I'm trying to figure out how an hrtimer…
On Monday 27 August 2007 17:51, James Chapman wrote:
> In the second half of my previous reply (which seems to have been
> deleted), I suggest a way to avoid this problem without using hardware
> interrupt mitigation / coalescing. Original text is quoted below.
>
> >> I've seen the same…
David Miller wrote:
> From: James Chapman <[EMAIL PROTECTED]>
> Date: Fri, 24 Aug 2007 18:16:45 +0100
> > Does hardware interrupt mitigation really interact well with NAPI?
> It interacts quite excellently.
If NAPI disables interrupts and keeps them disabled while there are more
packets arriving or…
On Fri, Aug 24, 2007 at 02:47:11PM -0700, David Miller wrote:
>
> Someone should reference that thread _now_ before this discussion goes
> too far and we repeat a lot of information ..
Here's part of the thread:
http://marc.info/?t=11159530601&r=1&w=2
Also, Jamal's paper may be of interest…
On Fri, Aug 24, 2007 at 02:44:36PM -0700, David Miller wrote:
> From: David Stevens <[EMAIL PROTECTED]>
> Date: Fri, 24 Aug 2007 09:50:58 -0700
>
> > Problem is if it increases rapidly, you may drop packets
> > before you notice that the ring is full in the current estimated
> > interval.
From: James Chapman <[EMAIL PROTECTED]>
Date: Fri, 24 Aug 2007 18:16:45 +0100
> Does hardware interrupt mitigation really interact well with NAPI?
It interacts quite excellently.
There was a long saga about this with tg3 and huge SGI numa
systems with large costs for interrupt processing, and…
From: David Stevens <[EMAIL PROTECTED]>
Date: Fri, 24 Aug 2007 09:50:58 -0700
> Problem is if it increases rapidly, you may drop packets
> before you notice that the ring is full in the current estimated
> interval.
This is one of many reasons why hardware interrupt mitigation
is really needed…
From: [EMAIL PROTECTED] (Linas Vepstas)
Date: Fri, 24 Aug 2007 11:45:41 -0500
> In the end, I just let it be, and let the system work as a
> busy-beaver, with the high interrupt rate. Is this a wise thing to
> do?
The tradeoff is always going to be latency vs. throughput.
A sane default should…
From: Jan-Bernd Themann <[EMAIL PROTECTED]>
Date: Fri, 24 Aug 2007 15:59:16 +0200
> 1) The current implementation of netif_rx_schedule, netif_rx_complete
> and net_rx_action has the following problem: netif_rx_schedule
> sets the NAPI_STATE_SCHED flag and adds the NAPI instance to the poll
> list…
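For readers without the source at hand, a sketch of the mechanics being discussed, written with napi_schedule_prep()/__napi_schedule(), the pair that replaced netif_rx_schedule(); the drv_* names are assumptions:

#include <linux/interrupt.h>
#include <linux/netdevice.h>

struct drv_queue {
        struct napi_struct napi;
};

void drv_disable_irq(struct drv_queue *q);      /* assumed */

static irqreturn_t drv_irq_handler(int irq, void *dev_id)
{
        struct drv_queue *q = dev_id;

        /* napi_schedule_prep() atomically tests and sets
         * NAPI_STATE_SCHED; __napi_schedule() then adds the instance
         * to this CPU's poll list and raises NET_RX_SOFTIRQ. The CPU
         * that takes the interrupt is the one that runs ->poll(). */
        if (napi_schedule_prep(&q->napi)) {
                drv_disable_irq(q);
                __napi_schedule(&q->napi);
        }
        return IRQ_HANDLED;
}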
On Fri, Aug 24, 2007 at 11:11:56PM +0200, Jan-Bernd Themann wrote:
> (when they are available for
> POWER in our case).
hrtimer worked fine on the powerpc cell arch last summer.
I assume they work on p5 and p6 too, no ??
> I tried to implement something with "normal" timers, but the result
> was…
From: Jan-Bernd Themann <[EMAIL PROTECTED]>
Date: Fri, 24 Aug 2007 15:59:16 +0200
> It would be nice if it were possible to schedule queues to other CPUs,
> or at least to use interrupts to put the queue on another CPU (not
> nice, as you never know which one you will hit)…
> Just to be clear, in the previous email I posted on this thread, I
> described a worst-case network ping-pong test case (send a packet, wait
> for reply), and found out that a deferred interrupt scheme just damaged
> the performance of the test case.
When splitting the rx and tx handlers, I found…
Just to be clear, in the previous email I posted on this thread, I
described a worst-case network ping-pong test case (send a packet, wait
for reply), and found out that a deferred interrupt scheme just damaged
the performance of the test case. Since the folks who came up with the
test case were…
On Fri, Aug 24, 2007 at 08:52:03AM -0700, Stephen Hemminger wrote:
>
> You need hardware support for deferred interrupts. Most devices have it
> (e1000, sky2, tg3) and it interacts well with NAPI. It is not a generic
> thing you want done by the stack; you want the hardware to hold off
> interrupts…
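On hardware with that support, the hold-off is tuned from userspace through ethtool's coalescing parameters; the device name and values below are purely illustrative:

# fire the RX interrupt only after 100us or 25 frames, whichever comes first
ethtool -C eth0 rx-usecs 100 rx-frames 25

# inspect the current coalescing settings
ethtool -c eth0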
On Fri, Aug 24, 2007 at 03:59:16PM +0200, Jan-Bernd Themann wrote:
> 3) On modern systems the incoming packets are processed very fast.
> Especially on SMP systems, when we use multiple queues, we process
> only a few packets per NAPI poll cycle. So NAPI does not work very
> well here and the…