On 12/19/2018 05:40 PM, Jesper Dangaard Brouer wrote:
> On Wed, 19 Dec 2018 16:46:05 +0100
> Daniel Borkmann <borkm...@iogearbox.net> wrote:
>
>> Hmm, why not just doing something as in your example below with napi_poll()
>> where you pass in the napi pointer, and then use bpf_probe_read_str() on
>> ctx->dev for fetching the name? At least there this should work and should
>> be okay given it's rather slow-path event.
>>
>>> [1]
>>> https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/samples/bpf/napi_monitor_kern.c#L34-L130
>
> I didn't try to use bpf_probe_read_str() in [1], but that is also not
> what I want in my use-case. I don't want the name, but the ifindex to
> filter on, as it will be faster. My use-case is allowing my
> napi_monitor program to filter on a specific net_device, inside the
> kernel via BPF.
>
> E.g. this didn't work:
>   bpf_probe_read(&ifindex, 4, &ctx->napi->dev->ifindex);
Something along the lines of this you could try:

#define probe_fetch(X)						\
	({ typeof(X) val;					\
	   bpf_probe_read(&val, sizeof(val), &X);		\
	   val; })

SEC("tracepoint/napi/napi_poll")
int napi_poll(struct napi_poll_ctx *ctx)
{
	struct napi_struct *napi = ctx->napi;
	struct net_device *dev;
	int ifindex;

	[...]
	dev = probe_fetch(napi->dev);
	ifindex = probe_fetch(dev->ifindex);
	[...]
}

> Perhaps you know how I can do this deref correctly?
>
> My napi_monitor use-case is not a slow-path event, even-though in
> optimal cases we should handle 64 packets per tracepoint invocation,
> but I'm using this for 100G NICs with >20Mpps. And I mostly use the
> tool when something looks wrong and I don't see 64 packet bulks, which
> is also why I detect when this gets invoked from idle task or from
> ksoftirqd.
>
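Spelled out without the macro, the same two-step fetch could look like the
sketch below, written in the style of the in-tree samples/bpf programs (so
kernel headers are assumed to be available at build time). Note this is a
sketch only: the napi_poll_ctx layout shown is an assumption that has to be
checked against /sys/kernel/debug/tracing/events/napi/napi_poll/format, and
IFINDEX_FILTER is just a made-up example value. The point is that the chained
ctx->napi->dev->ifindex load has to be split into two bpf_probe_read() calls,
since the verifier does not let a tracepoint program dereference napi->dev
directly.

/*
 * Sketch only, samples/bpf style build assumed. napi_poll_ctx layout
 * and IFINDEX_FILTER are assumptions for illustration, not taken from
 * the thread above.
 */
#include <linux/netdevice.h>
#include <uapi/linux/bpf.h>
#include "bpf_helpers.h"

struct napi_poll_ctx {
	unsigned long long pad;		/* common_* tracepoint fields */
	struct napi_struct *napi;
	int data_loc_dev_name;		/* __data_loc string offset */
	int work;
	int budget;
};

#define IFINDEX_FILTER	4		/* example: only trace this device */

SEC("tracepoint/napi/napi_poll")
int napi_poll(struct napi_poll_ctx *ctx)
{
	struct napi_struct *napi = ctx->napi;
	struct net_device *dev = NULL;
	int ifindex = 0;

	/*
	 * Two explicit probe reads instead of probe_fetch(): first pull
	 * the dev pointer out of the napi struct, then the ifindex out
	 * of the net_device.
	 */
	bpf_probe_read(&dev, sizeof(dev), &napi->dev);
	if (!dev)
		return 0;
	bpf_probe_read(&ifindex, sizeof(ifindex), &dev->ifindex);

	if (ifindex != IFINDEX_FILTER)
		return 0;

	/* ... count/record the event for the matching device ... */
	return 0;
}

char _license[] SEC("license") = "GPL";

The NULL check on dev is there because the napi_poll tracepoint can fire for
napi contexts that have no net_device attached.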