On 06/04/2019 04:54 PM, Daniel Borkmann wrote:
> On 06/03/2019 06:38 PM, Jonathan Lemon wrote:
>> Currently, the AF_XDP code uses a separate map in order to
>> determine if an xsk is bound to a queue. Instead of doing this,
>> have bpf_map_lookup_elem() return the queue_id, as a way of
>> indicating that there is a valid entry at the map index.
>>
>> Rearrange some xdp_sock members to eliminate structure holes.
>>
>> Signed-off-by: Jonathan Lemon <jonathan.le...@gmail.com>
>> Acked-by: Song Liu <songliubrav...@fb.com>
>> Acked-by: Björn Töpel <bjorn.to...@intel.com>
>> ---
>>  include/net/xdp_sock.h                          |  6 +++---
>>  kernel/bpf/verifier.c                           |  6 +++++-
>>  kernel/bpf/xskmap.c                             |  4 +++-
>>  .../selftests/bpf/verifier/prevent_map_lookup.c | 15 ---------------
>>  4 files changed, 11 insertions(+), 20 deletions(-)
>>
>> diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
>> index d074b6d60f8a..7d84b1da43d2 100644
>> --- a/include/net/xdp_sock.h
>> +++ b/include/net/xdp_sock.h
>> @@ -57,12 +57,12 @@ struct xdp_sock {
>>  	struct net_device *dev;
>>  	struct xdp_umem *umem;
>>  	struct list_head flush_node;
>> -	u16 queue_id;
>> -	struct xsk_queue *tx ____cacheline_aligned_in_smp;
>> -	struct list_head list;
>> +	u32 queue_id;
>>  	bool zc;
>>  	/* Protects multiple processes in the control path */
>>  	struct mutex mutex;
>> +	struct xsk_queue *tx ____cacheline_aligned_in_smp;
>> +	struct list_head list;
>>  	/* Mutual exclusion of NAPI TX thread and sendmsg error paths
>>  	 * in the SKB destructor callback.
>>  	 */
>> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
>> index 2778417e6e0c..91c730f85e92 100644
>> --- a/kernel/bpf/verifier.c
>> +++ b/kernel/bpf/verifier.c
>> @@ -2905,10 +2905,14 @@ static int check_map_func_compatibility(struct bpf_verifier_env *env,
>>  	 * appear.
>>  	 */
>>  	case BPF_MAP_TYPE_CPUMAP:
>> -	case BPF_MAP_TYPE_XSKMAP:
>>  		if (func_id != BPF_FUNC_redirect_map)
>>  			goto error;
>>  		break;
>> +	case BPF_MAP_TYPE_XSKMAP:
>> +		if (func_id != BPF_FUNC_redirect_map &&
>> +		    func_id != BPF_FUNC_map_lookup_elem)
>> +			goto error;
>> +		break;
>>  	case BPF_MAP_TYPE_ARRAY_OF_MAPS:
>>  	case BPF_MAP_TYPE_HASH_OF_MAPS:
>>  		if (func_id != BPF_FUNC_map_lookup_elem)
>> diff --git a/kernel/bpf/xskmap.c b/kernel/bpf/xskmap.c
>> index 686d244e798d..249b22089014 100644
>> --- a/kernel/bpf/xskmap.c
>> +++ b/kernel/bpf/xskmap.c
>> @@ -154,7 +154,9 @@ void __xsk_map_flush(struct bpf_map *map)
>>  
>>  static void *xsk_map_lookup_elem(struct bpf_map *map, void *key)
>>  {
>> -	return ERR_PTR(-EOPNOTSUPP);
>> +	struct xdp_sock *xs = __xsk_map_lookup_elem(map, *(u32 *)key);
>> +
>> +	return xs ? &xs->queue_id : NULL;
>>  }
>
> How do you guarantee that BPF programs don't mess around with the map
> values, e.g. overriding xs->queue_id from the lookup? This should be a
> read-only map from the BPF program side.
(Or via a per-cpu scratch variable into which you copy xs->queue_id and
return from here.)
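
Untested sketch of that per-cpu variant (the xsk_map_scratch name is made
up; it assumes lookups from BPF program context run with preemption
disabled, so the per-cpu slot stays stable while the program uses the
returned value):

static DEFINE_PER_CPU(u32, xsk_map_scratch);

static void *xsk_map_lookup_elem(struct bpf_map *map, void *key)
{
	struct xdp_sock *xs = __xsk_map_lookup_elem(map, *(u32 *)key);
	u32 *scratch;

	if (!xs)
		return NULL;

	/* Hand the program a per-cpu copy rather than a pointer into
	 * struct xdp_sock, so a store through the returned pointer
	 * cannot clobber xs->queue_id.
	 */
	scratch = this_cpu_ptr(&xsk_map_scratch);
	*scratch = READ_ONCE(xs->queue_id);
	return scratch;
}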
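
For reference, the program-side pattern the patch enables would be roughly
the following (illustrative only; the map definition, section names, and
program name are not from the patch):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_XSKMAP);
	__uint(max_entries, 64);
	__type(key, __u32);
	__type(value, __u32);
} xsks_map SEC(".maps");

SEC("xdp")
int xsk_redirect(struct xdp_md *ctx)
{
	__u32 key = ctx->rx_queue_index;
	__u32 *queue_id;

	/* A non-NULL lookup result now means "an xsk is bound at this
	 * index", which previously required a second map to track.
	 */
	queue_id = bpf_map_lookup_elem(&xsks_map, &key);
	if (!queue_id)
		return XDP_PASS;

	return bpf_redirect_map(&xsks_map, key, 0);
}

char _license[] SEC("license") = "GPL";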