>> -static inline int dev_requeue_skb(struct sk_buff *skb, struct Qdisc *q)
>> +static inline int __dev_requeue_skb(struct sk_buff *skb, struct Qdisc *q)
>> {
>
> Perhaps dev_requeue_skb_qdisc_locked is more descriptive. Alternatively,
> adding a lockdep_is_held(..) assertion would also document that the
> __locked variant below is not just a lock/unlock wrapper around this
> inner function.
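
As a rough sketch of that suggestion (not part of the patch; the body is
copied from the patch context, and lockdep_assert_held() is the usual way
to spell such an assertion in-tree):

```c
/* Sketch only: assert that the caller already holds the qdisc lock,
 * so the open-coded __skb_queue_head() below is safe. */
static inline int __dev_requeue_skb(struct sk_buff *skb, struct Qdisc *q)
{
	lockdep_assert_held(qdisc_lock(q));

	__skb_queue_head(&q->gso_skb, skb);
	q->qstats.requeues++;
	qdisc_qstats_backlog_inc(q, skb);
	q->q.qlen++;	/* it's still part of the queue */

	return 0;
}
```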
>
>> - q->gso_skb = skb;
>> + __skb_queue_head(&q->gso_skb, skb);
>> q->qstats.requeues++;
>> qdisc_qstats_backlog_inc(q, skb);
>> q->q.qlen++; /* it's still part of the queue */
>> @@ -57,6 +56,30 @@ static inline int dev_requeue_skb(struct sk_buff *skb, struct Qdisc *q)
>> return 0;
>> }
>>
>> +static inline int dev_requeue_skb_locked(struct sk_buff *skb, struct Qdisc *q)
>> +{
>> + spinlock_t *lock = qdisc_lock(q);
>> +
>> + spin_lock(lock);
>> + __skb_queue_tail(&q->gso_skb, skb);
>
> why does this requeue at the tail, unlike __dev_requeue_skb?
I guess the requeue has to queue at the tail in the lockless case, to
keep requeued skbs in order when more than one can be outstanding. In
the qdisc_locked case it does not matter, as there can only ever be at
most one outstanding gso_skb.