On Fri, Nov 11, 2016 at 10:55:10AM -0800, Martin KaFai Lau wrote:
> Provide a LRU version of the existing BPF_MAP_TYPE_PERCPU_HASH
> 
> Signed-off-by: Martin KaFai Lau <ka...@fb.com>
...
> +     /* For LRU, we need to alloc before taking bucket's
> +      * spinlock because LRU's elem alloc may need
> +      * to remove older elem from htab and this removal
> +      * operation will need a bucket lock.
> +      */
> +     if (map_flags != BPF_EXIST) {
> +             l_new = prealloc_lru_pop(htab, key, hash);
> +             if (!l_new)
> +                     return -ENOMEM;
> +     }
> +
> +     /* bpf_map_update_elem() can be called in_irq() */
> +     raw_spin_lock_irqsave(&b->lock, flags);
> +
> +     l_old = lookup_elem_raw(head, hash, key, key_size);
> +
> +     ret = check_flags(htab, l_old, map_flags);
> +     if (ret)
> +             goto err;
> +
> +     if (l_old) {
> +             bpf_lru_node_set_ref(&l_old->lru_node);
> +
> +             /* per-cpu hash map can update value in-place */
> +             pcpu_copy_value(htab, htab_elem_get_ptr(l_old, key_size),
> +                             value, onallcpus);
> +     } else {
> +             pcpu_copy_value(htab, htab_elem_get_ptr(l_new, key_size),
> +                             value, onallcpus);
> +             hlist_add_head_rcu(&l_new->hash_node, head);
> +             l_new = NULL;
> +     }
> +     ret = 0;
> +err:
> +     raw_spin_unlock_irqrestore(&b->lock, flags);
> +     if (l_new)
> +             bpf_lru_push_free(&htab->lru, &l_new->lru_node);
> +     return ret;
> +}

Definitely tricky code, but it all looks correct.
Acked-by: Alexei Starovoitov <a...@kernel.org>
