On 2019/8/30 17:32, Eric Dumazet wrote:

On 8/30/19 10:35 AM, Zhu Yanjun wrote:
When testing with a background iperf pushing 1Gbit/sec traffic and running
both ifconfig and netstat to collect statistics, some deadlocks occurred.

This is quite a heavy patch trying to fix a bug...
This patch switches to per-cpu variables, which is why the changes are fairly large.

I suspect the root cause has nothing to do with stat
collection since on 64bit arches there is no additional synchronization.
This bug is similar to the one fixed by commit 5f6b4e14cada ("net: dsa: User per-cpu 64-bit statistics"),
so this patch applies a similar fix to forcedeth.
(u64_stats_update_begin(), u64_stats_update_end() are nops)
Sure. Exactly.

+static inline void nv_get_stats(int cpu, struct fe_priv *np,
+                               struct rtnl_link_stats64 *storage)
+{
+       struct nv_txrx_stats *src = per_cpu_ptr(np->txrx_stats, cpu);
+       unsigned int syncp_start;
+
+       do {
+               syncp_start = u64_stats_fetch_begin_irq(&np->swstats_rx_syncp);
+               storage->rx_packets       += src->stat_rx_packets;
+               storage->rx_bytes         += src->stat_rx_bytes;
+               storage->rx_dropped       += src->stat_rx_dropped;
+               storage->rx_missed_errors += src->stat_rx_missed_errors;
+       } while (u64_stats_fetch_retry_irq(&np->swstats_rx_syncp, syncp_start));
+
+       do {
+               syncp_start = u64_stats_fetch_begin_irq(&np->swstats_tx_syncp);
+               storage->tx_packets += src->stat_tx_packets;
+               storage->tx_bytes   += src->stat_tx_bytes;
+               storage->tx_dropped += src->stat_tx_dropped;
+       } while (u64_stats_fetch_retry_irq(&np->swstats_tx_syncp, syncp_start));
+}
+

This is buggy:
If the loops are ever restarted, the storage->fields will have
been modified multiple times.
Sure. Sorry, my bad.
A similar change was made in commit 5f6b4e14cada ("net: dsa: User per-cpu 64-bit statistics");
I will follow the same approach.
I will send V2 soon.

Thanks a lot for your comments.

Zhu Yanjun
