Hi all, I'm still banging my head on this item...
On Wed, 2018-04-18 at 09:44 -0700, John Fastabend wrote:
> There is a set of conditions that if met we can run without the
> lock. Possibly ONETXQUEUE and aligned cpu_map is sufficient. We
> could detect this case and drop the locking. For existing systems
> and high Gbps NICs I think (feel free to correct me) assuming a
> core per cpu is OK. At some point though we probably need to
> revisit this assumption.

I think we can improve measurably by moving the __QDISC_STATE_RUNNING
bit fiddling around the __qdisc_run() call in the 'lockless' path,
instead of keeping it inside __qdisc_restart().

Currently, in the single-sender, packet-rate-below-link-limit scenario,
we hit the atomic bit overhead twice per transmitted packet: once for
each dequeue, plus another one for the next, failing dequeue attempt.
With the wider scope we will always hit it only once.

After that change, __QDISC_STATE_RUNNING usage will look a bit like
qdisc_lock(), for the dequeue part at least. So I'm wondering whether
we could replace __QDISC_STATE_RUNNING with spin_trylock(qdisc_lock())
_and_ keep that lock held for the whole qdisc_run().

The comment above qdisc_restart() states clearly that we can't, but I
don't see why. Acquiring qdisc_lock() and the xmit lock always in that
order looks safe to me. Can someone please explain? Is there some
possible deadlock condition I'm missing? It looks like the comment
itself comes directly from the pre-bitkeeper era (modulo lock name
changes).

Performance-wise, acquiring the qdisc_lock only once per transmitted
packet should improve 'locked' qdisc performance considerably, both in
the contended and in the uncontended scenario (and some quick
experiments seem to confirm that).

Thanks,

Paolo