On Mon, Oct 8, 2018 at 17:31, Eric Dumazet <eric.duma...@gmail.com> wrote:
>
> On 05/02/2018 04:01 AM, Björn Töpel wrote:
> > From: Björn Töpel <bjorn.to...@intel.com>
> >
> > The xskmap is yet another BPF map, very much inspired by
> > dev/cpu/sockmap, and is a holder of AF_XDP sockets. A user application
> > adds AF_XDP sockets into the map, and by using the bpf_redirect_map
> > helper, an XDP program can redirect XDP frames to an AF_XDP socket.
> >
> > Note that a socket that is bound to a certain ifindex/queue index will
> > *only* accept XDP frames from that netdev/queue index. If an XDP
> > program tries to redirect from a netdev/queue index other than the one
> > the socket is bound to, the frame will not be received on the socket.
> >
> > A socket can reside in multiple maps.
> >
> > v3: Fixed race and simplified code.
> > v2: Removed one indirection in map lookup.
> >
> > Signed-off-by: Björn Töpel <bjorn.to...@intel.com>
> > ---
> >  include/linux/bpf.h       |  25 +++++
> >  include/linux/bpf_types.h |   3 +
> >  include/net/xdp_sock.h    |   7 ++
> >  include/uapi/linux/bpf.h  |   1 +
> >  kernel/bpf/Makefile       |   3 +
> >  kernel/bpf/verifier.c     |   8 +-
> >  kernel/bpf/xskmap.c       | 239 ++++++++++++++++++++++++++++++++++++++++++++++
> >  net/xdp/xsk.c             |   5 +
> >  8 files changed, 289 insertions(+), 2 deletions(-)
> >  create mode 100644 kernel/bpf/xskmap.c
> >
>
> This function is called under rcu_read_lock(), from map_update_elem():
>
> > +
> > +static int xsk_map_update_elem(struct bpf_map *map, void *key, void *value,
> > +                            u64 map_flags)
> > +{
> > +     struct xsk_map *m = container_of(map, struct xsk_map, map);
> > +     u32 i = *(u32 *)key, fd = *(u32 *)value;
> > +     struct xdp_sock *xs, *old_xs;
> > +     struct socket *sock;
> > +     int err;
> > +
> > +     if (unlikely(map_flags > BPF_EXIST))
> > +             return -EINVAL;
> > +     if (unlikely(i >= m->map.max_entries))
> > +             return -E2BIG;
> > +     if (unlikely(map_flags == BPF_NOEXIST))
> > +             return -EEXIST;
> > +
> > +     sock = sockfd_lookup(fd, &err);
> > +     if (!sock)
> > +             return err;
> > +
> > +     if (sock->sk->sk_family != PF_XDP) {
> > +             sockfd_put(sock);
> > +             return -EOPNOTSUPP;
> > +     }
> > +
> > +     xs = (struct xdp_sock *)sock->sk;
> > +
> > +     if (!xsk_is_setup_for_bpf_map(xs)) {
> > +             sockfd_put(sock);
> > +             return -EOPNOTSUPP;
> > +     }
> > +
> > +     sock_hold(sock->sk);
> > +
> > +     old_xs = xchg(&m->xsk_map[i], xs);
> > +     if (old_xs) {
> > +             /* Make sure we've flushed everything. */
>
> So it is illegal to call synchronize_net(), since it is a reschedule point.
>

Thanks for finding and pointing this out, Eric!

I'll have a look and get back with a patch.
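
One possible direction (just a sketch, untested, not the final patch): mark
XDP sockets with SOCK_RCU_FREE when they are created, so the underlying
sock is freed only after an RCU grace period. The update path could then
drop the old entry's reference directly, without the sleeping
synchronize_net() call:

```c
/* Sketch only -- assumes SOCK_RCU_FREE gives us the grace period
 * we need. At socket creation (net/xdp/xsk.c), defer the actual
 * free past an RCU grace period:
 */
sock_set_flag(sk, SOCK_RCU_FREE);

/* xsk_map_update_elem() can then swap and drop the reference
 * immediately; concurrent lookups under rcu_read_lock() stay safe
 * because the free itself is RCU-deferred:
 */
old_xs = xchg(&m->xsk_map[i], xs);
if (old_xs)
	sock_put((struct sock *)old_xs);
```

The same change would apply to the delete path, which has an identical
synchronize_net() + sock_put() sequence.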


Björn


> > +             synchronize_net();
> > +             sock_put((struct sock *)old_xs);
> > +     }
> > +
> > +     sockfd_put(sock);
> > +     return 0;
> > +}
> >
>
>
>
