>>> +{
>>> +	struct net_device *dev = skb->dev;
>>> +	struct sk_buff *orig_skb = skb;
>>> +	struct netdev_queue *txq;
>>> +	int ret = NETDEV_TX_BUSY;
>>> +	bool again = false;
>>> +
>>> +	if (unlikely(!netif_running(dev) || !netif_carrier_ok(dev)))
>>> +		goto drop;
>>> +
>>> +	skb = validate_xmit_skb_list(skb, dev, &again);
>>> +	if (skb != orig_skb)
>>> +		return NET_XMIT_DROP;
>>
>> Need to free generated segment list on error, see packet_direct_xmit.
>
> I do not use segments in the TX code for reasons of simplicity, and
> the free is done in the calling function. But as I will create a common
> packet_direct_xmit according to your suggestion, it will have a
> kfree_skb_list() there, as in af_packet.c.
Ah yes. For these sockets it is guaranteed that skbs are not gso skbs.
Of course, makes sense.
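
Concretely, I would expect the error path of that common helper to end
up looking roughly like the one in packet_direct_xmit() in af_packet.c,
i.e. the early return replaced by a drop label that frees the whole
(possibly segmented) list. Untested sketch, only to illustrate; the
surrounding code stays as in your patch:

	skb = validate_xmit_skb_list(skb, dev, &again);
	if (skb != orig_skb)
		goto drop;		/* skb may now be a segment list */

	/* ... transmit as before ... */

drop:
	atomic_long_inc(&dev->tx_dropped);
	kfree_skb_list(skb);		/* frees every segment, as af_packet.c does */
	return NET_XMIT_DROP;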
>> +static inline struct xdp_desc *xskq_peek_desc(struct xsk_queue *q,
>> +					      struct xdp_desc *desc)
>> +{
>> +	struct xdp_rxtx_ring *ring;
>> +
>> +	if (q->cons_tail == q->cons_head) {
>> +		WRITE_ONCE(q->ring->consumer, q->cons_tail);
>> +		q->cons_head = q->cons_tail + xskq_nb_avail(q, RX_BATCH_SIZE);
>> +
>> +		/* Order consumer and data */
>> +		smp_rmb();
>> +
>> +		return xskq_validate_desc(q, desc);
>> +	}
>> +
>> +	ring = (struct xdp_rxtx_ring *)q->ring;
>> +	*desc = ring->desc[q->cons_tail & q->ring_mask];
>> +	return desc;
>>
>> This only validates descriptors if taking the branch.
>
> Yes, that is because we only want to validate the descriptors once
> even if we call this function multiple times for the same entry.
Then I am probably misreading this function. But isn't head increased
by up to RX_BATCH_SIZE frames at once? If so, then for many of those
frames the branch is not taken.
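
To make my reading concrete: for a batch of N newly available
descriptors I would expect a caller sequence along these lines (sketch
only, not from the patch; xskq_discard_desc() here stands for whatever
advances cons_tail in your series):

	struct xdp_desc desc;

	/* First peek: cons_tail == cons_head, so cons_head is refilled by
	 * up to RX_BATCH_SIZE and the entry goes through
	 * xskq_validate_desc().
	 */
	xskq_peek_desc(q, &desc);	/* validated */
	xskq_discard_desc(q);		/* cons_tail++ */

	/* Next N - 1 peeks: cons_tail != cons_head, so the descriptor is
	 * copied straight from ring->desc[cons_tail & ring_mask] without
	 * going through xskq_validate_desc().
	 */
	xskq_peek_desc(q, &desc);	/* not validated? */
	xskq_discard_desc(q);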