On Mon, Nov 28, 2016 at 07:48:48AM -0800, David Ahern wrote:
> Code move only; no functional change intended.

not quite...

> Signed-off-by: David Ahern <d...@cumulusnetworks.com>
...
>   * @sk: The socken sending or receiving traffic
> @@ -153,11 +166,15 @@ int __cgroup_bpf_run_filter(struct sock *sk,
>  
>       prog = rcu_dereference(cgrp->bpf.effective[type]);
>       if (prog) {
> -             unsigned int offset = skb->data - skb_network_header(skb);
> -
> -             __skb_push(skb, offset);
> -             ret = bpf_prog_run_save_cb(prog, skb) == 1 ? 0 : -EPERM;
> -             __skb_pull(skb, offset);
> +             switch (type) {
> +             case BPF_CGROUP_INET_INGRESS:
> +             case BPF_CGROUP_INET_EGRESS:
> +                     ret = __cgroup_bpf_run_filter_skb(skb, prog);
> +                     break;

hmm. what's the point of the double jump table? It only burns cycles
in the fast path. We already have
prog = rcu_dereference(cgrp->bpf.effective[type]); if (prog) {...}
Could you do a variant of __cgroup_bpf_run_filter() instead,
one that doesn't take 'skb' as an argument?
That would also get rid of the scary-looking NULL skb from patch 2:
__cgroup_bpf_run_filter(sk, NULL, ...
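(Purely as an illustration, reusing the _sk name suggested further down:
the patch-2 call site would then read something like

	err = __cgroup_bpf_run_filter_sk(sk, type);

instead of threading a NULL skb through the skb path.)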

And to avoid copy-pasting, the first dozen lines of the current
__cgroup_bpf_run_filter() could be moved into a helper that both
__cgroup_bpf_run_filter_skb() and
__cgroup_bpf_run_filter_sk() would call, along the lines of the
sketch below. Or rearrange that code some other way.
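Something like this, say (a rough, untested sketch; __cgroup_bpf_prog is
a hypothetical name, the sk sanity checks are from memory of the current
function, and it assumes the prog type from patch 2 runs with the sock
as its context):

/* hypothetical helper for the common prefix: sk checks + cgroup lookup;
 * caller must hold rcu_read_lock()
 */
static struct bpf_prog *__cgroup_bpf_prog(struct sock *sk,
					  enum bpf_attach_type type)
{
	struct cgroup *cgrp;

	if (!sk || !sk_fullsock(sk))
		return NULL;
	if (sk->sk_family != AF_INET && sk->sk_family != AF_INET6)
		return NULL;

	cgrp = sock_cgroup_ptr(&sk->sk_cgrp_data);
	return rcu_dereference(cgrp->bpf.effective[type]);
}

int __cgroup_bpf_run_filter_skb(struct sock *sk, struct sk_buff *skb,
				enum bpf_attach_type type)
{
	struct bpf_prog *prog;
	int ret = 0;

	rcu_read_lock();
	prog = __cgroup_bpf_prog(sk, type);
	if (prog) {
		/* same push/pull dance as today */
		unsigned int offset = skb->data - skb_network_header(skb);

		__skb_push(skb, offset);
		ret = bpf_prog_run_save_cb(prog, skb) == 1 ? 0 : -EPERM;
		__skb_pull(skb, offset);
	}
	rcu_read_unlock();

	return ret;
}

int __cgroup_bpf_run_filter_sk(struct sock *sk,
			       enum bpf_attach_type type)
{
	struct bpf_prog *prog;
	int ret = 0;

	rcu_read_lock();
	prog = __cgroup_bpf_prog(sk, type);
	if (prog) {
		/* assumes the sock itself is the program's context */
		ret = BPF_PROG_RUN(prog, sk) == 1 ? 0 : -EPERM;
	}
	rcu_read_unlock();

	return ret;
}

No switch on 'type' in the fast path, and the sk-only caller never
has to fake an skb.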
