>> +static void virtnet_poll_cleantx(struct receive_queue *rq)
>> +{
>> +       struct virtnet_info *vi = rq->vq->vdev->priv;
>> +       unsigned int index = vq2rxq(rq->vq);
>> +       struct send_queue *sq = &vi->sq[index];
>> +       struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, index);
>> +
>> +       __netif_tx_lock(txq, smp_processor_id());
>> +       free_old_xmit_skbs(sq, sq->napi.weight);
>> +       __netif_tx_unlock(txq);
>
>
> Should we check the tx napi weight here? Or is this intended as an
> independent optimization?

Good point. This was not intended to run in no-napi mode as is.
With interrupts disabled most of the time in that mode, I don't
expect it to be worthwhile there. I'll add a check for
sq->napi.weight != 0.
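
Something like this, on top of the function above (untested sketch):

static void virtnet_poll_cleantx(struct receive_queue *rq)
{
        struct virtnet_info *vi = rq->vq->vdev->priv;
        unsigned int index = vq2rxq(rq->vq);
        struct send_queue *sq = &vi->sq[index];
        struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, index);

        /* In no-napi mode tx interrupts are disabled most of the
         * time, so cleaning tx from the rx napi handler would
         * rarely find work: bail out unless tx napi is enabled.
         */
        if (!sq->napi.weight)
                return;

        __netif_tx_lock(txq, smp_processor_id());
        free_old_xmit_skbs(sq, sq->napi.weight);
        __netif_tx_unlock(txq);

        if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
                netif_wake_subqueue(vi->dev, vq2txq(sq->vq));
}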

>> +
>> +       if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
>> +               netif_wake_subqueue(vi->dev, vq2txq(sq->vq));
>> +}
>> +
>>   static int virtnet_poll(struct napi_struct *napi, int budget)
>>   {
>>         struct receive_queue *rq =
>> @@ -1039,6 +1056,8 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
>>         received = virtnet_receive(rq, budget);
>>
>> +       virtnet_poll_cleantx(rq);
>> +
>
>
> Better to do this before virtnet_receive(), considering that the refill
> there may allocate memory for rx buffers.

Will do.
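
That is, the hunk would become something like this (sketch; the
out-of-packets check and napi completion tail are left as they are):

static int virtnet_poll(struct napi_struct *napi, int budget)
{
        struct receive_queue *rq =
                container_of(napi, struct receive_queue, napi);
        unsigned int received;

        /* Clean tx before rx: the rx refill inside virtnet_receive()
         * may allocate memory, so free old tx skbs first.
         */
        virtnet_poll_cleantx(rq);

        received = virtnet_receive(rq, budget);

        /* ... out-of-packets check and napi completion unchanged ... */

        return received;
}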

> Btw, if this proves to be more efficient, in the future we may consider:
>
> 1) using a single interrupt for both rx and tx
> 2) using a single napi to handle both rx and tx

Agreed, I think that's sensible.
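
For 2), one option might be to point the tx virtqueue callback at the
rx napi of the same queue pair, so a single poll loop services both
directions (purely illustrative, not part of this series; callback
name as in today's virtio_net.c):

static void skb_xmit_done(struct virtqueue *vq)
{
        struct virtnet_info *vi = vq->vdev->priv;
        struct receive_queue *rq = &vi->rq[vq2txq(vq)];

        /* Suppress further tx interrupts; the rx napi for this
         * queue pair then cleans the tx ring from its poll loop,
         * e.g. via virtnet_poll_cleantx().
         */
        virtqueue_disable_cb(vq);
        napi_schedule(&rq->napi);
}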
