On Thu, 2017-11-30 at 23:14 -0800, Cong Wang wrote:
> On Wed, Nov 29, 2017 at 6:25 AM, Paolo Abeni <pab...@redhat.com> wrote:
> > Currently deleting qdisc with a large number of children and filters
> > can take a lot of time:
> > 
> > tc qdisc add dev lo root htb
> > for I in `seq 1 1000`; do
> >         tc class add dev lo parent 1: classid 1:$I htb rate 100kbit
> >         tc qdisc add dev lo parent 1:$I handle $((I + 1)): htb
> >         for J in `seq 1 10`; do
> >                 tc filter add dev lo parent $((I + 1)): u32 match ip src 1.1.1.$J
> >         done
> > done
> > time tc qdisc del dev lo root
> > 
> > real    0m54.764s
> > user    0m0.023s
> > sys     0m0.000s
> > 
> > This is due to the multiple rcu_barrier() calls, one for each tcf_block
> > freed, invoked with the rtnl lock held. Most other network-related
> > tasks will block for this entire timespan.
> 
> Yeah, Eric pointed this out too, and I already had an idea to cure
> this.
> 
> As I already mentioned before, my idea is to refcount the tcf block
> so that we don't need to worry about which user is the last one.
> Something like the attached patch below; note it is a PoC _only_, not
> even compiled yet. I am not 100% sure it works either; I will look
> deeper tomorrow.

Thank you for the feedback.

I tested your patch, and in the above scenario I measured:

real    0m0.017s
user    0m0.000s
sys     0m0.017s

so it apparently works well for this case.
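
For the archives, here is roughly how I read the idea. This is a
minimal sketch of my own, not your attached patch: the refcnt field
placement and the tcf_block_hold()/tcf_block_put_ref() helper names
are just my guesses. Every deferred user of the block (e.g. a pending
RCU callback) would hold a reference, and whoever drops the last one
frees the block, so nobody needs to rcu_barrier() to wait for the
others:

#include <linux/list.h>
#include <linux/refcount.h>
#include <linux/slab.h>

struct tcf_block {
	struct list_head chain_list;
	refcount_t refcnt;	/* set to 1 by the creator via
				 * refcount_set(); one extra ref per
				 * outstanding deferred user */
};

static void tcf_block_hold(struct tcf_block *block)
{
	refcount_inc(&block->refcnt);
}

static void tcf_block_put_ref(struct tcf_block *block)
{
	/* the last one to drop a reference frees the block: no
	 * rcu_barrier() with the rtnl lock held anymore
	 */
	if (refcount_dec_and_test(&block->refcnt))
		kfree(block);
}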

We could still have a storm of rtnl lock/unlock operations while
deleting a large tc tree with lots of filters, and I think we can
reduce them with bulk free, eventually applying it to filters, too.
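
Just to make the direction concrete, here is a rough sketch of what I
have in mind; the free_list member of struct tcf_block, the locking
scheme and all the names below are made up for illustration. The idea
is to queue the objects to be freed and let a single worker take the
rtnl lock once per batch, instead of once per object:

#include <linux/list.h>
#include <linux/rtnetlink.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

static LIST_HEAD(tcf_block_free_list);		/* pending blocks */
static DEFINE_SPINLOCK(tcf_block_free_lock);

static void tcf_block_free_work(struct work_struct *work);
static DECLARE_WORK(tcf_block_free_wq_work, tcf_block_free_work);

/* called from the deferred destruction path: just queue the block */
static void tcf_block_defer_free(struct tcf_block *block)
{
	spin_lock_bh(&tcf_block_free_lock);
	list_add_tail(&block->free_list, &tcf_block_free_list);
	spin_unlock_bh(&tcf_block_free_lock);
	schedule_work(&tcf_block_free_wq_work);
}

/* a single rtnl lock/unlock round trip for the whole batch */
static void tcf_block_free_work(struct work_struct *work)
{
	struct tcf_block *block, *tmp;
	LIST_HEAD(batch);

	spin_lock_bh(&tcf_block_free_lock);
	list_splice_init(&tcf_block_free_list, &batch);
	spin_unlock_bh(&tcf_block_free_lock);

	rtnl_lock();
	list_for_each_entry_safe(block, tmp, &batch, free_list) {
		list_del(&block->free_list);
		kfree(block);	/* stand-in for the rtnl-protected
				 * teardown of the block */
	}
	rtnl_unlock();
}

The same kind of batching should then be applicable to the filters,
too.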

That would also reduce the pressure on the rtnl lock when, e.g., OVS
H/W offload pushes a lot of rules per second.

WDYT?

Cheers,

Paolo
