On Thu, Jan 25, 2018 at 05:04:46PM +0200, Michael S. Tsirkin wrote:
> The comment near __ptr_ring_peek says:
> 
>  * If ring is never resized, and if the pointer is merely
>  * tested, there's no need to take the lock - see e.g.  __ptr_ring_empty.
> 
> but this was in fact never possible, as the index temporarily gets out of
> range.
> 
> We tried to allocate one more entry for lockless peeking.
> 
> It turns out some callers relied on the allocation failing when given a
> size of UINT_MAX: adding 1 overflows to zero, which is then passed to
> kmalloc().
> 
> In that case kmalloc() returns ZERO_SIZE_PTR, which ptr_ring mistakes for
> a valid pointer - and then crashes on dereference.
> 
> To fix, keep consumer index valid at all times.
> 
> Fixes: bcecb4bbf88a ("net: ptr_ring: otherwise safe empty checks can overrun array bounds")
> Fixes: c5ad119fb6c09 ("net: sched: pfifo_fast use skb_array")
> Reported-by: syzbot+87678bcf753b44c39...@syzkaller.appspotmail.com
> Reported-by: Jason Wang <jasow...@redhat.com>
> Signed-off-by: Michael S. Tsirkin <m...@redhat.com>

The patch is good but the commit log is all wrong.
NACK - I'll repost it properly ASAP.
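To make the failure mode concrete, here is a minimal userspace sketch of what
the commit log describes; fake_kmalloc() and FAKE_ZERO_SIZE_PTR are
illustrative stand-ins for the kernel allocator, not the actual kernel code:

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for the kernel allocator: kmalloc(0) returns ZERO_SIZE_PTR,
 * a small non-NULL constant that passes NULL checks but faults on
 * dereference.  The value 16 here is only illustrative.
 */
#define FAKE_ZERO_SIZE_PTR ((void *)16)

static void *fake_kmalloc(unsigned int size)
{
	if (size == 0)
		return FAKE_ZERO_SIZE_PTR;
	return malloc(size);
}

int main(void)
{
	unsigned int size = UINT_MAX;

	/* "Allocate one extra entry for lockless peeking": size + 1 wraps
	 * to 0, so the allocator is asked for zero bytes instead of failing.
	 */
	void **queue = fake_kmalloc((size + 1) * sizeof(void *));

	if (!queue)
		return 1;	/* callers relied on reaching this path for UINT_MAX */

	printf("got %p - non-NULL, but dereferencing it would crash\n",
	       (void *)queue);
	return 0;
}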

> ---
>  include/linux/ptr_ring.h | 25 ++++++++++++++++---------
>  1 file changed, 16 insertions(+), 9 deletions(-)
> 
> diff --git a/include/linux/ptr_ring.h b/include/linux/ptr_ring.h
> index 37b4bb2..802375f 100644
> --- a/include/linux/ptr_ring.h
> +++ b/include/linux/ptr_ring.h
> @@ -236,22 +236,28 @@ static inline void __ptr_ring_discard_one(struct ptr_ring *r)
>       /* Fundamentally, what we want to do is update consumer
>        * index and zero out the entry so producer can reuse it.
>        * Doing it naively at each consume would be as simple as:
> -      *       r->queue[r->consumer++] = NULL;
> -      *       if (unlikely(r->consumer >= r->size))
> -      *               r->consumer = 0;
> +      *       consumer = r->consumer;
> +      *       r->queue[consumer++] = NULL;
> +      *       if (unlikely(consumer >= r->size))
> +      *               consumer = 0;
> +      *       r->consumer = consumer;
>        * but that is suboptimal when the ring is full as producer is writing
>        * out new entries in the same cache line.  Defer these updates until a
>        * batch of entries has been consumed.
>        */
> -     int head = r->consumer_head++;
> +     /* Note: we must keep consumer_head valid at all times for __ptr_ring_peek
> +      * to work correctly.
> +      */
> +     int consumer_head = r->consumer_head;
> +     int head = consumer_head++;
>  
>       /* Once we have processed enough entries invalidate them in
>        * the ring all at once so producer can reuse their space in the ring.
>        * We also do this when we reach end of the ring - not mandatory
>        * but helps keep the implementation simple.
>        */
> -     if (unlikely(r->consumer_head - r->consumer_tail >= r->batch ||
> -                  r->consumer_head >= r->size)) {
> +     if (unlikely(consumer_head - r->consumer_tail >= r->batch ||
> +                  consumer_head >= r->size)) {
>               /* Zero out entries in the reverse order: this way we touch the
>                * cache line that producer might currently be reading the last;
>                * producer won't make progress and touch other cache lines
> @@ -259,12 +265,13 @@ static inline void *__ptr_ring_discard_one(struct ptr_ring *r)
>                */
>               while (likely(head >= r->consumer_tail))
>                       r->queue[head--] = NULL;
> -             r->consumer_tail = r->consumer_head;
> +             r->consumer_tail = consumer_head;
>       }
> -     if (unlikely(r->consumer_head >= r->size)) {
> -             r->consumer_head = 0;
> +     if (unlikely(consumer_head >= r->size)) {
> +             consumer_head = 0;
>               r->consumer_tail = 0;
>       }
> +     r->consumer_head = consumer_head;
>  }
>  
>  static inline void *__ptr_ring_consume(struct ptr_ring *r)
> -- 
> MST
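
For readers following along, the core of the fix is the publish-last pattern:
compute the new consumer index in a local variable, wrap it back into range,
and only then store it, so an unlocked reader such as __ptr_ring_peek() never
observes an out-of-range index.  A simplified sketch under that assumption
(toy_ring and the helpers below are illustrative, not the kernel API; batching
and WRITE_ONCE/READ_ONCE are omitted for brevity):

struct toy_ring {
	void	**queue;
	int	size;
	int	consumer_head;	/* next slot to consume; always in [0, size) */
};

/* Lockless peek: only safe because consumer_head stays in range. */
static inline void *toy_peek(struct toy_ring *r)
{
	return r->queue[r->consumer_head];
}

static inline void toy_discard_one(struct toy_ring *r)
{
	int consumer_head = r->consumer_head;

	r->queue[consumer_head++] = NULL;
	if (consumer_head >= r->size)
		consumer_head = 0;
	/* Publish only after wrapping, so a concurrent toy_peek() never
	 * indexes past the end of the array.
	 */
	r->consumer_head = consumer_head;
}

The real __ptr_ring_discard_one() additionally batches the zeroing of consumed
entries, as the hunk above shows.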
