Re: High CPU load by native_queued_spin_lock_slowpath

2017-11-12 Thread Sergey K.
After 1 month and 2 weeks I found a solution :)! The main idea is to redirect outgoing traffic to an ifb device from every queue of the real eth interface. Example:

tc qdisc add dev eth0 root handle 1: mq
tc qdisc add dev eth0 parent 1:1 handle 8001: htb
tc filter add dev eth0 parent 8001: u32 .
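The per-queue approach described above can be sketched a little more fully as follows. This is a sketch under assumptions, not the poster's exact configuration: the device names (eth0, ifb0), the single redirected queue, and the handle numbers beyond those quoted are illustrative.

```shell
# Sketch: use an mq root so each hardware TX queue gets its own child
# qdisc, hang an HTB instance under one queue, and redirect that
# queue's traffic to an ifb device where the shaping hierarchy lives.
# Assumes eth0 is multiqueue and the ifb module is available.
modprobe ifb numifbs=1
ip link set dev ifb0 up

# mq root: children 1:1, 1:2, ... map one-to-one to hardware TX queues.
tc qdisc add dev eth0 root handle 1: mq

# Per-queue HTB instead of a single root HTB, so all CPU cores are not
# contending on one global qdisc lock (the slowpath seen in the perf
# trace that started this thread).
tc qdisc add dev eth0 parent 1:1 handle 8001: htb

# Match everything leaving this queue and redirect it to ifb0.
tc filter add dev eth0 parent 8001: protocol all u32 match u32 0 0 \
    action mirred egress redirect dev ifb0
```

Repeating the last two commands for each 1:N child (handles 8002:, 8003:, ...) would extend the redirect to every hardware queue, which appears to be what "from every queue" refers to.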

Re: High CPU load by native_queued_spin_lock_slowpath

2017-10-10 Thread Sergey K.
I'm using the ifb0 device for outgoing traffic. I have one bond0 interface with exit to the Internet, and 2 interfaces, eth0 and eth2, to local users. ifb0 is for shaping Internet traffic from bond0 to eth2 or eth0. All outgoing traffic to eth0 and eth2 is redirected to ifb0. > What about multiple ifb
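The quoted question ("What about multiple ifb") points at the likely bottleneck: a single ifb0 funnels all shaped traffic through one qdisc lock. A minimal sketch of spreading the load over several ifb devices, assuming hypothetical device counts, rates, and class numbers not taken from the thread:

```shell
# Sketch: create several ifb devices so shaping work is spread over
# multiple independent qdisc locks instead of one. All numbers here
# (4 devices, 250mbit per class) are illustrative assumptions.
modprobe ifb numifbs=4
for i in 0 1 2 3; do
    ip link set dev "ifb$i" up
    # Each ifb gets its own HTB hierarchy; traffic from different
    # source queues or interfaces can then be redirected to different
    # ifb devices.
    tc qdisc add dev "ifb$i" root handle 1: htb default 10
    tc class add dev "ifb$i" parent 1: classid 1:10 htb rate 250mbit
done
```

The design trade-off is that classes on different ifb devices cannot borrow bandwidth from each other, so the split only works cleanly when traffic can be partitioned (per queue, per interface, or per user group).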

Re: High CPU load by native_queued_spin_lock_slowpath

2017-10-10 Thread Eric Dumazet
On Tue, 2017-10-10 at 18:00 +0600, Sergey K. wrote: > I'm using Debian 9 (stretch) with kernel 4.9 on an HP DL385 G7 server > with 32 CPU cores. NIC queues are tied to processor cores. The server is > shaping traffic (iproute2 and htb discipline + skbinfo + ipset + ifb) > and filtering some rules by ipta

High CPU load by native_queued_spin_lock_slowpath

2017-10-10 Thread Sergey K.
I'm using Debian 9 (stretch) with kernel 4.9 on an HP DL385 G7 server with 32 CPU cores. NIC queues are tied to processor cores. The server is shaping traffic (iproute2 and htb discipline + skbinfo + ipset + ifb) and filtering with some iptables rules. At the moment when traffic goes up to about 1 Gbit/s, cp
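"NIC queues are tied to processor cores" is usually done by pinning each queue's interrupt to one CPU. A minimal sketch of that step; the interface name eth0 is taken from the thread, but the IRQ naming in /proc/interrupts varies by NIC driver, so the grep pattern is an assumption:

```shell
# Sketch: pin each NIC queue interrupt to its own CPU core, one core
# per queue. Assumes queue IRQs show up in /proc/interrupts with the
# interface name in them (driver-dependent); requires root.
cpu=0
for irq in $(grep eth0 /proc/interrupts | awk -F: '{print $1}'); do
    echo "$cpu" > "/proc/irq/$irq/smp_affinity_list"
    cpu=$((cpu + 1))
done
```

Note that this affinity only parallelizes interrupt and softirq work; as the rest of the thread shows, a single root qdisc (or a single ifb device) can still serialize all those cores on one spinlock.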