On Fri, 22 Feb 2019 00:02:23 +0100, Toke Høiland-Jørgensen wrote:
> Jakub Kicinski <jakub.kicin...@netronome.com> writes:
> 
> > On Thu, 21 Feb 2019 12:56:54 +0100, Toke Høiland-Jørgensen wrote:  
> >> A common pattern when using xdp_redirect_map() is to create a device map
> >> where the lookup key is simply ifindex. Because device maps are arrays,
> >> this leaves holes in the map, and the map has to be sized to fit the
> >> largest ifindex, regardless of how many devices are actually
> >> needed in the map.
> >> 
> >> This patch adds a second type of device map where the key is interpreted as
> >> an ifindex and looked up using a hashmap, instead of being used as an array
> >> index. This leads to maps being densely packed, so they can be smaller.
> >> 
> >> The default maps used by xdp_redirect() are changed to use the new map
> >> type, which means that xdp_redirect() is no longer limited to ifindex < 64,
> >> but instead to 64 total simultaneous interfaces per network namespace. This
> >> also provides an easy way to compare the performance of devmap and
> >> devmap_idx:
> >> 
> >> xdp_redirect_map (devmap): 8394560 pkt/s
> >> xdp_redirect (devmap_idx): 8179480 pkt/s
> >> 
> >> Difference: 215080 pkt/s or 3.1 nanoseconds per packet.  
> >
> > Could you share what the ifindex mix was here, to arrive at these
> > numbers? How does it compare to using an array but not keying with
> > ifindex?  
> 
> Just the standard set on my test machine; ifindex 1 through 9, except 8
> in this case. So certainly no more than 1 ifindex in each hash bucket
> for those numbers.

Oh, I clearly misread your numbers - the hash variant is still slower
than the array, you just don't need the size limit.
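
For context, the sizing difference the commit message describes looks
roughly like this from the BPF program side.  This is a sketch only -
the BPF_MAP_TYPE_DEVMAP_IDX name and the sample-style bpf_helpers.h
map definition macros are assumptions based on the patch description,
not code taken from the series:

#include <linux/bpf.h>
#include "bpf_helpers.h"

/* Array-based devmap: the key is used directly as an array index,
 * so max_entries has to exceed the largest ifindex on the system.
 */
struct bpf_map_def SEC("maps") tx_devmap = {
	.type		= BPF_MAP_TYPE_DEVMAP,
	.key_size	= sizeof(__u32),
	.value_size	= sizeof(__u32),
	.max_entries	= 64,	/* must cover the largest ifindex */
};

/* Hash-based variant: the key is still an ifindex, but entries are
 * densely packed, so max_entries only needs to cover the number of
 * devices actually inserted.  The type name here is assumed.
 */
struct bpf_map_def SEC("maps") tx_devmap_idx = {
	.type		= BPF_MAP_TYPE_DEVMAP_IDX,	/* assumed name */
	.key_size	= sizeof(__u32),
	.value_size	= sizeof(__u32),
	.max_entries	= 8,	/* number of devices, not max ifindex */
};

SEC("xdp")
int xdp_redirect_sketch(struct xdp_md *ctx)
{
	__u32 out_ifindex = 6;	/* illustrative target interface */

	/* The redirect call itself is identical for both map types. */
	return bpf_redirect_map(&tx_devmap_idx, out_ifindex, 0);
}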

> >> Signed-off-by: Toke Høiland-Jørgensen <t...@redhat.com>  
> >  
> >> +static int dev_map_idx_update_elem(struct bpf_map *map, void *key, void *value,
> >> +                             u64 map_flags)
> >> +{
> >> +  struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
> >> +  struct bpf_dtab_netdev *dev, *old_dev;
> >> +  u32 idx = *(u32 *)key;
> >> +  u32 val = *(u32 *)value;
> >> +  u32 bit;
> >> +
> >> +  if (unlikely(map_flags > BPF_EXIST))
> >> +          return -EINVAL;
> >> +  if (unlikely(map_flags == BPF_NOEXIST))
> >> +          return -EEXIST;
> >> +
> >> +  old_dev = __dev_map_idx_lookup_elem(map, idx);
> >> +  if (!val) {
> >> +          if (!old_dev)
> >> +                  return 0;  
> >
> > IMHO this is a fairly strange mix of array and hashmap semantics. I
> > think you should stick to hashmap behaviour as far as flags and
> > update/delete go.  
> 
> Yeah, the double book-keeping is a bit strange, but it allows the actual
> forwarding and flush code to be reused between both types of maps. I
> think this is worth the slight semantic confusion :)

I'm not sure I was clear, let me try again :)  Your get_next_key only
reports existing indexes if I read the code right, so that's not an
array - in an array indexes always exist.  It follows that inserting 0
should not be equivalent to a delete, and BPF_NOEXIST should be handled
appropriately.
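
Concretely, I'd expect the update path to look more or less like this
(a rough sketch reusing the names from the quoted hunk, not actual
code from your series):

static int dev_map_idx_update_elem(struct bpf_map *map, void *key, void *value,
				   u64 map_flags)
{
	struct bpf_dtab_netdev *old_dev;
	u32 idx = *(u32 *)key;

	if (unlikely(map_flags > BPF_EXIST))
		return -EINVAL;

	old_dev = __dev_map_idx_lookup_elem(map, idx);

	/* Hashmap semantics: check the flags against whether the key
	 * currently exists instead of rejecting BPF_NOEXIST up front.
	 */
	if (old_dev && map_flags == BPF_NOEXIST)
		return -EEXIST;
	if (!old_dev && map_flags == BPF_EXIST)
		return -ENOENT;

	/* ... allocate and insert the new bpf_dtab_netdev here; a zero
	 * value creates an entry like any other, and deletion stays in
	 * the map's delete_elem callback ...
	 */
	return 0;
}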

Different maps behave differently; I think it's worth trying to limit
the divergence by sticking to the basic array and hashmap models when
possible.
