On Mon, 6 Jul 2020 19:40:50 +0000 Saeed Mahameed wrote:
> > I don't really feel too strongly, I'm just trying to get the details
> > because I feel like the situation is going to be increasingly common.
> > It'd be quite sad if drivers had to reimplement all stats in sw.
>
> Depends on the HW; our HW/FW supports providing stats per
> (Vport/function), which means that if a packet gets lost between the
> NIC and the netdev queue it will still be counted as an rx/mcast
> packet. We have a private ethtool counter to show this drop, but with
> HW stats the packet would also be counted in the netdev rx counters.
>
> So this is why I always prefer SW stats for the netdev-reported
> stats; all we need to count in SW is {rx,tx} x {packets, bytes} plus
> rx mcast packets.
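The SW-only counting described above (per-queue {rx,tx} x {packets, bytes} plus rx mcast, folded into netdev totals) can be sketched roughly as below. This is a userspace model, not mlx5 or kernel code: the names (`sw_ring_stats`, `rx_complete`, `dev_fold_stats`) are invented for illustration, and a real driver would keep these as per-ring counters protected by `u64_stats_sync` and fold them in `.ndo_get_stats64`.

```c
#include <assert.h>
#include <stdint.h>

#define NUM_QUEUES 4

/* The minimal SW set the mail argues for:
 * {rx,tx} x {packets, bytes} + rx mcast. All names hypothetical. */
struct sw_ring_stats {
	uint64_t rx_packets;
	uint64_t rx_bytes;
	uint64_t rx_mcast;
	uint64_t tx_packets;
	uint64_t tx_bytes;
};

/* The datapath (NAPI poll / xmit completion in a real driver) bumps the
 * counters only for packets that actually reached this netdev's queues,
 * so packets lost between the NIC and the queue are never counted. */
static void rx_complete(struct sw_ring_stats *s, uint64_t len, int is_mcast)
{
	s->rx_packets++;
	s->rx_bytes += len;
	if (is_mcast)
		s->rx_mcast++;
}

static void tx_complete(struct sw_ring_stats *s, uint64_t len)
{
	s->tx_packets++;
	s->tx_bytes += len;
}

/* Fold per-queue counters into netdev-level totals, the way a driver's
 * .ndo_get_stats64 would. Because each netdev sums only its own rings,
 * multiple netdevs on the same function report disjoint traffic. */
static void dev_fold_stats(const struct sw_ring_stats rings[], int n,
			   struct sw_ring_stats *total)
{
	*total = (struct sw_ring_stats){0};
	for (int i = 0; i < n; i++) {
		total->rx_packets += rings[i].rx_packets;
		total->rx_bytes   += rings[i].rx_bytes;
		total->rx_mcast   += rings[i].rx_mcast;
		total->tx_packets += rings[i].tx_packets;
		total->tx_bytes   += rings[i].tx_bytes;
	}
}
```

The per-packet cost of this scheme is a handful of increments on cache-hot, per-queue memory, which is the "per packet cycles" trade-off mentioned further down.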
If that was indeed the intention it'd have been done in the core, not
in each driver separately..

> This gives more flexibility and correctness: any given HW can create
> multiple netdevs on the same function, and we need the netdev stats
> to reflect only the traffic that went through that netdev.
>
> > I thought it would be entirely reasonable for the driver to read
> > the stats from a delayed work every 1/2 HZ and cache that. We do
> > have a knob in ethtool IRQ coalescing settings for stats writeback
> > frequency.
>
> Some customers didn't like this, since for drivers that implement it
> the CPU power utilization will be slightly higher at idle.

Other customers may dislike the per-packet cycles. I don't really
mind, I just found the commit message to be lacking for a fix, which
this supposedly is.

Also it looks like you report the total number of mcast packets in
ethtool -S, which should be identical to ip -s? If so, please remove
that counter.
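The polling alternative quoted above (a delayed work reading device stats every 1/2 HZ and caching them) looks roughly like this. Again a userspace model with invented names (`query_device`, `stats_work`, `get_stats64`); a real driver would run `stats_work` from a `struct delayed_work` rescheduled with `queue_delayed_work(wq, &dw, HZ / 2)`.

```c
#include <assert.h>
#include <stdint.h>

/* Counters the worker caches; a real driver would cache the full
 * FW/HW query result. */
struct hw_stats {
	uint64_t rx_packets;
	uint64_t tx_packets;
};

static struct hw_stats cached_stats; /* last snapshot from the device */
static uint64_t device_queries;      /* how often we hit the "FW"     */

/* Stand-in for the expensive firmware stats query. */
static struct hw_stats query_device(void)
{
	device_queries++;
	return (struct hw_stats){ .rx_packets = 1000, .tx_packets = 500 };
}

/* Body of the periodic worker: refresh the cache. In a real driver it
 * would end by rescheduling itself for HZ / 2 later. */
static void stats_work(void)
{
	cached_stats = query_device();
}

/* What .ndo_get_stats64 would do under this scheme: copy the cache and
 * never touch the device, so stats readers pay no HW round trip. */
static struct hw_stats get_stats64(void)
{
	return cached_stats;
}
```

The trade-off discussed in the thread falls out directly: reads are cheap and the datapath stays untouched, but the worker wakes the CPU every half second even when the link is idle.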