On Thu, Feb 25, 2021 at 11:52:19PM +0100, Andrew Lunn wrote:
> On Thu, Feb 25, 2021 at 02:18:32PM +0200, Vladimir Oltean wrote:
> > @@ -327,8 +329,8 @@ static void enetc_get_tx_tstamp(struct enetc_hw *hw,
> > union enetc_tx_bd *txbd,
> > {
> > u32 lo, hi, tstamp_lo;
> >
> > - lo = enetc_rd(hw, ENETC_SICTR0);
> > - hi = enetc_rd(hw, ENETC_SICTR1);
> > + lo = enetc_rd_hot(hw, ENETC_SICTR0);
> > + hi = enetc_rd_hot(hw, ENETC_SICTR1);
> > tstamp_lo = le32_to_cpu(txbd->wb.tstamp);
> > if (lo <= tstamp_lo)
> > hi -= 1;
>
> Hi Vladimir
>
> This change is not obvious, and there is no mention of it in the
> commit message. Please could you explain it. I guess it is to do with
> enetc_get_tx_tstamp() being called with the MDIO lock held now, when
> it was not before?

I realize this is an uncharacteristically short commit message and I'm
sorry for that; if needed I can resend.

Your assumption is correct; the new call path is:
enetc_msix
-> napi_schedule
   -> enetc_poll
      -> enetc_lock_mdio
      -> enetc_clean_tx_ring
         -> enetc_get_tx_tstamp
      -> enetc_clean_rx_ring
      -> enetc_unlock_mdio
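
To put it differently, enetc_poll() now looks roughly like this (a
simplified sketch based on the call path above; the real argument lists
are elided):

static int enetc_poll(struct napi_struct *napi, int budget)
{
        ...
        enetc_lock_mdio();

        /* everything in here runs with the MDIO lock already held, so
         * it has to use the _hot accessors instead of enetc_rd()
         */
        enetc_clean_tx_ring(...);       /* calls enetc_get_tx_tstamp() */
        work_done = enetc_clean_rx_ring(...);

        enetc_unlock_mdio();
        ...
}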

The '_hot' accessors do plain, 'unlocked' register reads and writes and
expect the caller to already hold the MDIO lock, while enetc_rd
contains enetc_lock_mdio, followed by the actual read, followed by
enetc_unlock_mdio.
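
A minimal sketch of that distinction, written as inline functions for
readability (the hw->reg and ioread32() details are my shorthand for
"the actual MMIO access", not the driver's exact macros):

/* "hot" accessor: plain MMIO read, caller already holds the MDIO lock */
static inline u32 enetc_rd_hot(struct enetc_hw *hw, u32 off)
{
        return ioread32(hw->reg + off);
}

/* standalone accessor: takes and drops the MDIO lock around the read */
static inline u32 enetc_rd(struct enetc_hw *hw, u32 off)
{
        u32 val;

        enetc_lock_mdio();
        val = ioread32(hw->reg + off);
        enetc_unlock_mdio();

        return val;
}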

The goal is to eventually get rid of all the _hot stuff and always take
the lock from the top level; this would allow us to do more register
read/write batching, which would amortize the cost of the locking
overall.
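
As an illustration of the payoff, a caller that reads two registers
back to back (take the SICTR pair from the hunk above) would then pay
for a single lock/unlock instead of two:

        /* per-access locking: the lock is taken and dropped twice */
        lo = enetc_rd(hw, ENETC_SICTR0);
        hi = enetc_rd(hw, ENETC_SICTR1);

        /* lock taken at the top level: one lock, both reads batched */
        enetc_lock_mdio();
        lo = enetc_rd_hot(hw, ENETC_SICTR0);
        hi = enetc_rd_hot(hw, ENETC_SICTR1);
        enetc_unlock_mdio();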