On Mon, Mar 27, 2017 at 12:13:24PM +0200, Miroslav Lichvar wrote:
> On Fri, Mar 24, 2017 at 10:17:51AM -0700, Denny Page wrote:
> > I should have remembered this yesterday... I went and looked at my favorite
> > driver, Intel's igb. Not only is the igb driver already caching link speed,
> > it is also performing timestamp correction based on that link speed.
>
> Isn't the i210 the only NIC for which the correction is actually
> implemented?

Yes.

> Will this ever be done for all HW with timestamping
> support, so that the applications wouldn't have to care about link
> speed?

No.

At the end of the day, the correction in the igb driver is useless and
even harmful. Why? Because an application that cares about this level
of accuracy has to implement special correction logic anyhow, and a
special case for the igb is just even more work for the app.

In addition, if you look into the igb data sheet, you will find a range
of correction values for each link speed, with little indication of how
the latency was measured or what the ranges depend on. In my
experiments, I have seen the igb consistently land on the extreme of
one of the ranges (who knows why), but the driver corrects using the
average, forcing me to then correct the remaining offset by hand.
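Just to be concrete about the "special logic": below is a minimal
sketch of what an application might do on its own, querying the link
speed with the legacy ETHTOOL_GSET ioctl and subtracting a per-speed
ingress latency from a raw RX hardware timestamp. The helper names
(link_speed_mbps, rx_latency_ns, correct_rx_ts) and the latency
constants are made up for illustration; they are not from any data
sheet or from linuxptp.

/* Sketch: apply a user-measured, link-speed-dependent ingress latency
 * correction to a raw RX hardware timestamp (nanoseconds). */
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>
#include <unistd.h>

static int link_speed_mbps(const char *ifname)
{
	struct ethtool_cmd ecmd = { .cmd = ETHTOOL_GSET };
	struct ifreq ifr;
	int fd, speed;

	fd = socket(AF_INET, SOCK_DGRAM, 0);
	if (fd < 0)
		return -1;
	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
	ifr.ifr_data = (void *) &ecmd;
	if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
		close(fd);
		return -1;
	}
	close(fd);
	speed = ethtool_cmd_speed(&ecmd);
	return speed;
}

/* Placeholder ingress latencies in nanoseconds, measured by hand
 * for one particular NIC and setup. */
static int64_t rx_latency_ns(int speed)
{
	switch (speed) {
	case 10:	return 10000;
	case 100:	return 1000;
	case 1000:	return 300;
	default:	return 0;
	}
}

static int64_t correct_rx_ts(int64_t raw_ns, const char *ifname)
{
	return raw_ns - rx_latency_ns(link_speed_mbps(ifname));
}

In practice the speed would be cached and refreshed on link change
events rather than queried per packet, but that is the shape of it:
per-NIC, per-speed constants that only the user can really measure.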
> > I believe that timestamp correction, whether it be speed based latency,
> > header -> trailer, or whatever else might be needed later down the line,
> > are properly done in the driver. It’s a lot for the application to try and
> > figure out if it should or should not be doing corrections and what
> > correction to apply. The driver knows.
>
> I agree, but I'm not sure how feasible that is.

+1

Thanks,
Richard