From: Eric Dumazet
> Sent: 25 November 2016 15:46
> mlx4 stats are chaotic because a deferred workqueue is responsible
> for updating them every 250 ms.
>
> Even sampling stats once per second with "sar -n DEV 1" gives
> variations like the following:
...
> This patch allows the rx/tx bytes/packets counters to be folded at
> the time we need stats.
>
> We can now fetch stats every 1 ms if we want to check NIC behavior
> over a small time window. It also makes it easier to detect anomalies.
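
For context, a minimal sketch of the fold-at-fetch approach described
above; the structure and field names below are illustrative, not the
exact mlx4 ones:

	#include <linux/compiler.h>	/* READ_ONCE() */
	#include <linux/if_link.h>	/* struct rtnl_link_stats64 */

	/* Illustrative per-ring counters, updated only by the rx napi path. */
	struct demo_rx_ring {
		unsigned long packets;
		unsigned long bytes;
	};

	/* Fold the per-ring counters into the device stats at the moment
	 * the stats are requested, instead of relying on a periodic
	 * workqueue snapshot.
	 */
	static void demo_fold_rx_stats(struct rtnl_link_stats64 *stats,
				       struct demo_rx_ring **rings,
				       int nrings)
	{
		int i;

		for (i = 0; i < nrings; i++) {
			/* READ_ONCE() gives a single load of a counter that
			 * another CPU may be incrementing concurrently.
			 */
			stats->rx_packets += READ_ONCE(rings[i]->packets);
			stats->rx_bytes   += READ_ONCE(rings[i]->bytes);
		}
	}
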
...
> Signed-off-by: Eric Dumazet <[email protected]>
> Cc: Tariq Toukan <[email protected]>
...
> for (i = 0; i < priv->rx_ring_num; i++) {
> - stats->rx_packets += priv->rx_ring[i]->packets;
> - stats->rx_bytes += priv->rx_ring[i]->bytes;
> - sw_rx_dropped += priv->rx_ring[i]->dropped;
> - priv->port_stats.rx_chksum_good += priv->rx_ring[i]->csum_ok;
> - priv->port_stats.rx_chksum_none += priv->rx_ring[i]->csum_none;
> - priv->port_stats.rx_chksum_complete +=
> priv->rx_ring[i]->csum_complete;
> - priv->xdp_stats.rx_xdp_drop += priv->rx_ring[i]->xdp_drop;
> - priv->xdp_stats.rx_xdp_tx += priv->rx_ring[i]->xdp_tx;
> - priv->xdp_stats.rx_xdp_tx_full += priv->rx_ring[i]->xdp_tx_full;
> + const struct mlx4_en_rx_ring *ring = priv->rx_ring[i];
> +
> + sw_rx_dropped += READ_ONCE(ring->dropped);
> + priv->port_stats.rx_chksum_good += READ_ONCE(ring->csum_ok);
> + priv->port_stats.rx_chksum_none += READ_ONCE(ring->csum_none);
> + priv->port_stats.rx_chksum_complete +=
> READ_ONCE(ring->csum_complete);
> + priv->xdp_stats.rx_xdp_drop += READ_ONCE(ring->xdp_drop);
> + priv->xdp_stats.rx_xdp_tx += READ_ONCE(ring->xdp_tx);
> + priv->xdp_stats.rx_xdp_tx_full += READ_ONCE(ring->xdp_tx_full);
This chunk (and the one after it) seems to be adding READ_ONCE() and
doesn't seem to be related to the commit message.
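
(For reference, READ_ONCE() here just turns the plain loads of fields
the rx path updates concurrently into single, non-torn accesses,
roughly:

	unsigned long a = ring->packets;            /* compiler may tear or re-load */
	unsigned long b = READ_ONCE(ring->packets); /* single coherent load */

so it reads as a separate lockless-read annotation rather than part of
the counter folding itself.)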
David