On Tue, Feb 2, 2021 at 11:08 AM Loic Poulain <loic.poul...@linaro.org> wrote:
>
> When device side MTU is larger than host side MTU, the packets
> (typically rmnet packets) are split over multiple MHI transfers.
> In that case, fragments must be re-aggregated to recover the packet
> before forwarding to upper layer.
>
> A fragmented packet results in -EOVERFLOW MHI transaction status for
> each of its fragments, except the final one. Such transfer was
> previoulsy considered as error and fragments were simply dropped.
nit: previously

> This change adds a re-aggregation mechanism using skb chaining, via
> skb frag_list.
>
> A warning (once) is printed since this behavior usually comes from
> a misconfiguration of the device (e.g. modem MTU).
>
> Signed-off-by: Loic Poulain <loic.poul...@linaro.org>

Only one real question wrt stats. Otherwise looks good to me, thanks.

> ---
> v2: use zero-copy skb chaining instead of skb_copy_expand.
>
>  drivers/net/mhi_net.c | 79 ++++++++++++++++++++++++++++++++++++++++++++-------
>  1 file changed, 69 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/net/mhi_net.c b/drivers/net/mhi_net.c
> index 4f512531..be39779 100644
> --- a/drivers/net/mhi_net.c
> +++ b/drivers/net/mhi_net.c
> @@ -32,6 +32,8 @@ struct mhi_net_stats {
>  struct mhi_net_dev {
>  	struct mhi_device *mdev;
>  	struct net_device *ndev;
> +	struct sk_buff *skbagg_head;
> +	struct sk_buff *skbagg_tail;
>  	struct delayed_work rx_refill;
>  	struct mhi_net_stats stats;
>  	u32 rx_queue_sz;
> @@ -132,6 +134,37 @@ static void mhi_net_setup(struct net_device *ndev)
>  	ndev->tx_queue_len = 1000;
>  }
>
> +static struct sk_buff *mhi_net_skb_agg(struct mhi_net_dev *mhi_netdev,
> +				       struct sk_buff *skb)
> +{
> +	struct sk_buff *head = mhi_netdev->skbagg_head;
> +	struct sk_buff *tail = mhi_netdev->skbagg_tail;
> +
> +	/* This is non-paged skb chaining using frag_list */
> +

no need for empty line?

> +	if (!head) {
> +		mhi_netdev->skbagg_head = skb;
> +		return skb;
> +	}
> +
> +	if (!skb_shinfo(head)->frag_list)
> +		skb_shinfo(head)->frag_list = skb;
> +	else
> +		tail->next = skb;
> +
> +	/* data_len is normally the size of paged data, in our case there is no
> +	 * paged data (nr_frags = 0), so it represents the size of chained skbs,
> +	 * This way, net core will consider skb->frag_list.
> +	 */

data_len is defined as the data excluding the linear len (ref:
skb_headlen). That is not just paged data, but includes frag_list.
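For reference, a short sketch of the two relevant helpers, essentially
as defined in include/linux/skbuff.h (comments added here), showing why
data_len necessarily covers both paged frags and the frag_list:

/* skb->len counts all payload: linear buffer + paged frags + frag_list
 * skbs. skb->data_len is the non-linear share of that, so the linear
 * ("head") length is simply the difference:
 */
static inline unsigned int skb_headlen(const struct sk_buff *skb)
{
	return skb->len - skb->data_len;
}

/* How the net core walks the skbs chained via frag_list */
#define skb_walk_frags(skb, iter) \
	for (iter = skb_shinfo(skb)->frag_list; iter; iter = iter->next)

So bumping head->len and head->data_len by skb->len below keeps
skb_headlen(head) unchanged, which is exactly what chaining wants.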
> +	head->len += skb->len;
> +	head->data_len += skb->len;
> +	head->truesize += skb->truesize;
> +
> +	mhi_netdev->skbagg_tail = skb;
> +
> +	return mhi_netdev->skbagg_head;
> +}
> +
>  static void mhi_net_dl_callback(struct mhi_device *mhi_dev,
>  				struct mhi_result *mhi_res)
>  {
> @@ -142,19 +175,42 @@ static void mhi_net_dl_callback(struct mhi_device *mhi_dev,
>  	free_desc_count = mhi_get_free_desc_count(mhi_dev, DMA_FROM_DEVICE);
>
>  	if (unlikely(mhi_res->transaction_status)) {
> -		dev_kfree_skb_any(skb);
> -
> -		/* MHI layer stopping/resetting the DL channel */
> -		if (mhi_res->transaction_status == -ENOTCONN)
> +		switch (mhi_res->transaction_status) {
> +		case -EOVERFLOW:
> +			/* Packet can not fit in one MHI buffer and has been
> +			 * split over multiple MHI transfers, do re-aggregation.
> +			 * That usually means the device side MTU is larger than
> +			 * the host side MTU/MRU. Since this is not optimal,
> +			 * print a warning (once).
> +			 */
> +			netdev_warn_once(mhi_netdev->ndev,
> +					 "Fragmented packets received, fix MTU?\n");
> +			skb_put(skb, mhi_res->bytes_xferd);
> +			mhi_net_skb_agg(mhi_netdev, skb);
> +			break;
> +		case -ENOTCONN:
> +			/* MHI layer stopping/resetting the DL channel */
> +			dev_kfree_skb_any(skb);
>  			return;
> -
> -		u64_stats_update_begin(&mhi_netdev->stats.rx_syncp);
> -		u64_stats_inc(&mhi_netdev->stats.rx_errors);
> -		u64_stats_update_end(&mhi_netdev->stats.rx_syncp);
> +		default:
> +			/* Unknown error, simply drop */
> +			dev_kfree_skb_any(skb);
> +			u64_stats_update_begin(&mhi_netdev->stats.rx_syncp);
> +			u64_stats_inc(&mhi_netdev->stats.rx_errors);
> +			u64_stats_update_end(&mhi_netdev->stats.rx_syncp);
> +		}
>  	} else {
> +		skb_put(skb, mhi_res->bytes_xferd);
> +
> +		if (mhi_netdev->skbagg_head) {
> +			/* Aggregate the final fragment */
> +			skb = mhi_net_skb_agg(mhi_netdev, skb);
> +			mhi_netdev->skbagg_head = NULL;
> +		}
> +
>  		u64_stats_update_begin(&mhi_netdev->stats.rx_syncp);
>  		u64_stats_inc(&mhi_netdev->stats.rx_packets);
> -		u64_stats_add(&mhi_netdev->stats.rx_bytes, mhi_res->bytes_xferd);
> +		u64_stats_add(&mhi_netdev->stats.rx_bytes, skb->len);

might this change stats? it will if skb->len != 0 before skb_put.
Even if so, perhaps it doesn't matter.
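To make the question concrete, a sketch with made-up sizes (a
hypothetical 5000-byte packet the device splits into 4096 + 904; not
part of the patch):

/* Fragment 1: transaction_status == -EOVERFLOW, bytes_xferd == 4096 */
skb_put(skb, mhi_res->bytes_xferd);	/* skb->len == 4096 */
mhi_net_skb_agg(mhi_netdev, skb);	/* chained, no stats recorded */

/* Fragment 2: transaction_status == 0, bytes_xferd == 904 */
skb_put(skb, mhi_res->bytes_xferd);	/* skb->len == 904 */
skb = mhi_net_skb_agg(mhi_netdev, skb);	/* skb is now the 5000-byte head */
mhi_netdev->skbagg_head = NULL;

/* Old accounting: rx_bytes += mhi_res->bytes_xferd  ->   904
 * New accounting: rx_bytes += skb->len              ->  5000
 * For an unfragmented packet the skb is freshly allocated, so
 * skb->len == 0 before skb_put() and the two values coincide.
 */

Counting the full re-assembled length at least matches what rx_packets
now counts: one completed packet per increment.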