Tested this patch with virtio-net regression tests; everything works fine.

Tested-by: Lei Yang <[email protected]>
On Mon, Nov 3, 2025 at 1:59 PM Xuan Zhuo <[email protected]> wrote:
>
> On Thu, 30 Oct 2025 21:44:38 +0700, Bui Quang Minh <[email protected]> wrote:
> > Since commit 4959aebba8c0 ("virtio-net: use mtu size as buffer length
> > for big packets"), when guest gso is off, the allocated size for big
> > packets is not MAX_SKB_FRAGS * PAGE_SIZE anymore but depends on
> > negotiated MTU. The number of allocated frags for big packets is stored
> > in vi->big_packets_num_skbfrags.
> >
> > Because the host announced buffer length can be malicious (e.g. the host
> > vhost_net driver's get_rx_bufs is modified to announce incorrect
> > length), we need a check in virtio_net receive path. Currently, the
> > check is not adapted to the new change which can lead to NULL page
> > pointer dereference in the below while loop when receiving length that
> > is larger than the allocated one.
> >
> > This commit fixes the received length check corresponding to the new
> > change.
> >
> > Fixes: 4959aebba8c0 ("virtio-net: use mtu size as buffer length for big packets")
> > Cc: [email protected]
> > Signed-off-by: Bui Quang Minh <[email protected]>
>
> Reviewed-by: Xuan Zhuo <[email protected]>
>
> > ---
> > Changes in v7:
> > - Fix typos
> > - Link to v6: https://lore.kernel.org/netdev/[email protected]/
> > Changes in v6:
> > - Fix the length check
> > - Link to v5: https://lore.kernel.org/netdev/[email protected]/
> > Changes in v5:
> > - Move the length check to receive_big
> > - Link to v4: https://lore.kernel.org/netdev/[email protected]/
> > Changes in v4:
> > - Remove unrelated changes, add more comments
> > - Link to v3: https://lore.kernel.org/netdev/[email protected]/
> > Changes in v3:
> > - Convert BUG_ON to WARN_ON_ONCE
> > - Link to v2: https://lore.kernel.org/netdev/[email protected]/
> > Changes in v2:
> > - Remove incorrect give_pages call
> > - Link to v1: https://lore.kernel.org/netdev/[email protected]/
> > ---
> >  drivers/net/virtio_net.c | 25 ++++++++++++-------------
> >  1 file changed, 12 insertions(+), 13 deletions(-)
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index a757cbcab87f..421b9aa190a0 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -910,17 +910,6 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
> >  		goto ok;
> >  	}
> >
> > -	/*
> > -	 * Verify that we can indeed put this data into a skb.
> > -	 * This is here to handle cases when the device erroneously
> > -	 * tries to receive more than is possible. This is usually
> > -	 * the case of a broken device.
> > -	 */
> > -	if (unlikely(len > MAX_SKB_FRAGS * PAGE_SIZE)) {
> > -		net_dbg_ratelimited("%s: too much data\n", skb->dev->name);
> > -		dev_kfree_skb(skb);
> > -		return NULL;
> > -	}
> >  	BUG_ON(offset >= PAGE_SIZE);
> >  	while (len) {
> >  		unsigned int frag_size = min((unsigned)PAGE_SIZE - offset, len);
> > @@ -2107,9 +2096,19 @@ static struct sk_buff *receive_big(struct net_device *dev,
> >  				   struct virtnet_rq_stats *stats)
> >  {
> >  	struct page *page = buf;
> > -	struct sk_buff *skb =
> > -		page_to_skb(vi, rq, page, 0, len, PAGE_SIZE, 0);
> > +	struct sk_buff *skb;
> > +
> > +	/* Make sure that len does not exceed the size allocated in
> > +	 * add_recvbuf_big.
> > +	 */
> > +	if (unlikely(len > (vi->big_packets_num_skbfrags + 1) * PAGE_SIZE)) {
> > +		pr_debug("%s: rx error: len %u exceeds allocated size %lu\n",
> > +			 dev->name, len,
> > +			 (vi->big_packets_num_skbfrags + 1) * PAGE_SIZE);
> > +		goto err;
> > +	}
> >
> > +	skb = page_to_skb(vi, rq, page, 0, len, PAGE_SIZE, 0);
> >  	u64_stats_add(&stats->bytes, len - vi->hdr_len);
> >  	if (unlikely(!skb))
> >  		goto err;
> > --
> > 2.43.0
> >
>
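For anyone skimming the thread, here is a minimal stand-alone sketch of the bound the new check in receive_big() relies on. This is not the driver code: PAGE_SIZE and big_packets_num_skbfrags below are stand-in values (in the driver the latter comes from the negotiated MTU when guest GSO is off), and big_packet_len_ok() is just a name used for illustration.

/* Illustration only: the patch rejects any announced length above
 * (big_packets_num_skbfrags + 1) * PAGE_SIZE, the upper bound of what
 * add_recvbuf_big() posts for one big-packet buffer.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL	/* stand-in page size */

/* stand-in for vi->big_packets_num_skbfrags */
static unsigned long big_packets_num_skbfrags = 4;

static bool big_packet_len_ok(unsigned int len)
{
	return len <= (big_packets_num_skbfrags + 1) * PAGE_SIZE;
}

int main(void)
{
	printf("len 20480 ok: %d\n", big_packet_len_ok(20480)); /* exactly 5 pages, accepted */
	printf("len 20481 ok: %d\n", big_packet_len_ok(20481)); /* over the bound, rejected */
	return 0;
}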

