On Thu, 2015-11-19 at 17:27 +1030, Jonathan Woithe wrote:
> On Thu, Nov 19, 2015 at 01:56:08AM +0100, Francois Romieu wrote:
> > Jonathan Woithe <[email protected]> :
> > [...]
> > > I could try. What's the most reliable way to determine this? Use regular
> > > ethtool queries about the time of the failures?
> >
> > If you are neither space nor cpu constrained:
> >
> > while : ; do
> > echo $(date +%s.%N):$(ethtool -d eth0 raw on | base64 -w 0):$(ethtool -S eth0 | base64 -w 0)
> > sleep 1
> > done > gloubi
> >
> > Expect roughly 2.5~3 MB per hour.
>
> We're certainly not disc constrained. The high speed acquisition takes a
> fair bit of CPU but there should still be sufficient free to at least give
> the above a go. I'll run some tests tomorrow and see what transpires.
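
Each snapshot line in the resulting `gloubi` log can later be unpacked with something like the sketch below. The `decode_gloubi` helper name and the field handling are assumptions based purely on the loop above (base64 output contains no ':', so splitting on ':' is safe):

```shell
# Sketch of a decoder for the "gloubi" log produced by the loop above.
# Assumed line format (not stated in the thread):
#   <epoch.ns>:<base64 register dump>:<base64 "ethtool -S" text>
decode_gloubi() {
	while IFS= read -r line; do
		ts=${line%%:*}           # timestamp up to the first ':'
		rest=${line#*:}
		regs_b64=${rest%%:*}     # raw register dump, base64-encoded
		stats_b64=${rest#*:}     # ethtool -S output, base64-encoded
		echo "=== snapshot at $ts ==="
		# Show the start of the register dump as hex
		printf '%s' "$regs_b64" | base64 -d | od -A x -t x1 | head -n 4
		# Statistics are plain text once decoded
		printf '%s' "$stats_b64" | base64 -d
	done < "$1"
}
```

Diffing successive decoded register dumps around the time of a stall should then show which status bits changed.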
This looks like a race when/if RX happens very shortly after one transmit.

Francois, are we really saving a lot of CPU cycles by testing status?

The following patch would probably reduce the window.
(And doing TX completions before RX might help as well.)
diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
index 79ef799f88ab1fc2518736163c5674aec360c065..ed3d0322643600c8c3521aa0ac7b935c929527aa 100644
--- a/drivers/net/ethernet/realtek/r8169.c
+++ b/drivers/net/ethernet/realtek/r8169.c
@@ -7541,17 +7541,14 @@ static int rtl8169_poll(struct napi_struct *napi, int budget)
 	struct rtl8169_private *tp = container_of(napi, struct rtl8169_private, napi);
 	struct net_device *dev = tp->dev;
 	u16 enable_mask = RTL_EVENT_NAPI | tp->event_slow;
-	int work_done = 0;
+	int work_done;
 	u16 status;
 
 	status = rtl_get_events(tp);
 	rtl_ack_events(tp, status & ~tp->event_slow);
 
-	if (status & RTL_EVENT_NAPI_RX)
-		work_done = rtl_rx(dev, tp, (u32) budget);
-
-	if (status & RTL_EVENT_NAPI_TX)
-		rtl_tx(dev, tp);
+	rtl_tx(dev, tp);
+	work_done = rtl_rx(dev, tp, (u32) budget);
 
 	if (status & tp->event_slow) {
 		enable_mask &= ~tp->event_slow;