On Mon, 22 Jul 2019 13:52:48 +0200
Toke Høiland-Jørgensen <t...@redhat.com> wrote:

> +static inline struct hlist_head *dev_map_index_hash(struct bpf_dtab *dtab,
> +                                                 int idx)
> +{
> +     return &dtab->dev_index_head[idx & (NETDEV_HASHENTRIES - 1)];
> +}

It is good for performance that our "hash" function is simply an AND
operation on the idx.  We want to keep it this way.

I don't like that you are using NETDEV_HASHENTRIES, because the BPF map
infrastructure already has a way to specify the map size (struct
bpf_map_def .max_entries).  BUT for performance reasons, to keep the
AND operation, we would need to round up the hash-array size to the
nearest power of 2 (or reject the map if the user didn't specify a
power of 2, if we want to "expose" this limit to users).

> +struct bpf_dtab_netdev *__dev_map_hash_lookup_elem(struct bpf_map *map, u32 key)
> +{
> +     struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
> +     struct hlist_head *head = dev_map_index_hash(dtab, key);
> +     struct bpf_dtab_netdev *dev;
> +
> +     hlist_for_each_entry_rcu(dev, head, index_hlist)
> +             if (dev->idx == key)
> +                     return dev;
> +
> +     return NULL;
> +}
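
With that in place, the hash helper above could mask with the per-map
bucket count instead of NETDEV_HASHENTRIES (again just a sketch,
assuming the n_buckets member suggested above):

	static inline struct hlist_head *dev_map_index_hash(struct bpf_dtab *dtab,
							    int idx)
	{
		/* n_buckets is a power of 2, so the AND-mask still works */
		return &dtab->dev_index_head[idx & (dtab->n_buckets - 1)];
	}

__dev_map_hash_lookup_elem() itself would then need no changes, since
it already goes through dev_map_index_hash().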

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
