On 02/05, Willem de Bruijn wrote:
> On Tue, Feb 5, 2019 at 12:57 PM Stanislav Fomichev <s...@google.com> wrote:
> >
> > Currently, when eth_get_headlen calls flow dissector, it doesn't pass any
> > skb. Because we use passed skb to lookup associated networking namespace
> > to find whether we have a BPF program attached or not, we always use
> > C-based flow dissector in this case.
> >
> > The goal of this patch series is to add new networking namespace argument
> > to the eth_get_headlen and make BPF flow dissector programs be able to
> > work in the skb-less case.
> >
> > The series goes like this:
> > 1. introduce __init_skb and __init_skb_shinfo; those will be used to
> >    initialize temporary skb
> > 2. introduce skb_net which can be used to get networking namespace
> >    associated with an skb
> > 3. add new optional network namespace argument to __skb_flow_dissect and
> >    plumb through the callers
> > 4. add new __flow_bpf_dissect which constructs temporary on-stack skb
> >    (using __init_skb) and calls BPF flow dissector program
>
> The main concern I see with this series is this cost of skb zeroing
> for every packet in the device driver receive routine, *independent*
> from the real skb allocation and zeroing which will likely happen
> later.
Yes, plus ~200 bytes on the stack for the callers.
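Roughly, I expect the skb-less path to look something like the sketch below
(__init_skb and __flow_bpf_dissect are the names from the cover letter; the
signatures and the reuse of __skb_flow_bpf_dissect here are just my
approximation, not the final patch):

/* Build a temporary skb on the stack around the raw buffer and reuse
 * the existing skb-based BPF flow dissector entry point.
 */
static bool __flow_bpf_dissect(struct bpf_prog *prog, void *data,
			       __be16 proto, int hlen,
			       struct bpf_flow_keys *flow_keys)
{
	struct sk_buff skb;	/* ~200+ bytes of the caller's stack */

	/* __init_skb (patch 1) initializes the temporary skb around
	 * @data -- this is where the per-packet zeroing happens.
	 */
	__init_skb(&skb, data, hlen);
	skb.protocol = proto;

	/* Run the attached BPF flow dissector program on the
	 * temporary skb.
	 */
	return __skb_flow_bpf_dissect(prog, &skb, &flow_keys_dissector,
				      flow_keys);
}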
Not sure how visible this zeroing is, though; I can probably try to get some numbers from BPF_PROG_TEST_RUN (running the current version vs running with an on-stack skb).
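Something like the following should give a ballpark ns/run number to compare
a kernel with and without the on-stack skb path (libbpf's bpf_prog_test_run;
"bpf_flow.o" and the zeroed packet buffer are just placeholders):

#include <stdio.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

int main(void)
{
	__u32 duration = 0, retval = 0, size_out = 0;
	char pkt[128] = {};	/* placeholder; a real run would use an
				 * actual Ethernet+IPv4+TCP frame */
	struct bpf_object *obj;
	int prog_fd, err;

	err = bpf_prog_load("bpf_flow.o", BPF_PROG_TYPE_FLOW_DISSECTOR,
			    &obj, &prog_fd);
	if (err)
		return 1;

	/* repeat enough times for a stable average; the kernel reports
	 * the mean duration per iteration in nanoseconds
	 */
	err = bpf_prog_test_run(prog_fd, 1000000, pkt, sizeof(pkt),
				NULL, &size_out, &retval, &duration);
	if (err)
		return 1;

	printf("retval=%u avg=%u ns/run\n", retval, duration);
	return 0;
}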