On Mon, Jan 23, 2017 at 2:05 PM, Eric Dumazet <eric.duma...@gmail.com> wrote:
> On Mon, 2017-01-23 at 13:46 -0800, Xiangning Yu wrote:
>> On Mon, Jan 23, 2017 at 12:56 PM, Cong Wang <xiyou.wangc...@gmail.com> wrote:
>> > On Mon, Jan 23, 2017 at 10:46 AM, Xiangning Yu <yuxiangn...@gmail.com> 
>> > wrote:
>> >> Hi netdev folks,
>> >>
>> >> It looks like we call dev_forward_skb() in veth_xmit(), which calls
>> >> netif_rx() eventually.
>> >>
>> >> While netif_rx() will enqueue the skb to the CPU RX backlog before the
>> >> actual processing takes place, this actually means a TX skb has to
>> >> wait for some unrelated RX skbs to finish. And this will happen twice
>> >> for a single ping, because the veth devices always work as a pair?
>> >
>> > For me it is more about the completeness of the network stack of each
>> > netns. The /proc net.core.netdev_max_backlog etc. are per netns, which
>> > means each netns, as an independent network stack, should respect them
>> > too.
>> >
>> > Since you care about latency, why not tune net.core.dev_weight for your
>> > own netns?
>>
>> I haven't tried that yet, thank you for the hint! Though normally one
>> of the veth devices will be in the global namespace.
>
> Well, per-cpu backlogs are not per netns, but per cpu.

Right, they all point to the same weight_p. But weight_p itself
is not per cpu; softnet_data is. However, if a container uses
cpuset and netns for isolation, per cpu effectively turns into per netns.
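
To make that concrete, here is a rough sketch of how the two relate; it is
paraphrased from memory of net/core/dev.c around this time, not a verbatim
copy, so treat names and details as approximate:

/* Simplified sketch (not verbatim kernel code).
 * net.core.dev_weight writes the single global weight_p; every CPU's
 * backlog NAPI re-reads it, so the knob is global, while the queues
 * themselves live in per-CPU softnet_data.
 */
int weight_p __read_mostly = 64;	/* net.core.dev_weight */

DEFINE_PER_CPU_ALIGNED(struct softnet_data, softnet_data);

static int process_backlog(struct napi_struct *napi, int quota)
{
	struct softnet_data *sd = container_of(napi, struct softnet_data, backlog);

	napi->weight = weight_p;	/* same global value on every CPU */

	/* ... dequeue up to 'quota' skbs from sd->input_pkt_queue and feed
	 * them to __netif_receive_skb() ...
	 */
	return 0;
}

So tuning net.core.dev_weight changes how many backlog packets each CPU
processes per softirq round, in every netns at once; only the queues, not
the knob, are per CPU.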

The point about network stack completeness still stands.
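
For reference, the path the original mail describes looks roughly like the
sketch below; it condenses drivers/net/veth.c and net/core/dev.c from
memory, with locking, error handling and RPS details omitted:

/* Condensed sketch (not verbatim kernel code): veth TX hands the skb to
 * the peer device's RX side via netif_rx(), which only queues it on the
 * current CPU's backlog; the real RX processing happens later in the
 * NET_RX_SOFTIRQ softirq.
 */
static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct veth_priv *priv = netdev_priv(dev);
	struct net_device *rcv = rcu_dereference(priv->peer);

	/* dev_forward_skb() scrubs the skb, retargets it to the peer
	 * device and ends up in netif_rx().
	 */
	if (likely(dev_forward_skb(rcv, skb) == NET_RX_SUCCESS))
		return NETDEV_TX_OK;

	return NETDEV_TX_OK;	/* drop accounting omitted */
}

/* netif_rx() -> enqueue_to_backlog(): the skb is appended to this CPU's
 * softnet_data->input_pkt_queue (bounded by net.core.netdev_max_backlog)
 * behind whatever is already queued there, which is where the extra
 * latency in the original question comes from; it happens once per veth
 * peer the packet crosses.
 */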
