Hi,

On Fri, 2017-12-01 at 14:07 -0800, Cong Wang wrote:
> On Fri, Dec 1, 2017 at 3:05 AM, Paolo Abeni <pab...@redhat.com> wrote:
> > 
> > Thank you for the feedback.
> > 
> > I tested your patch and in the above scenario I measure:
> > 
> > real    0m0.017s
> > user    0m0.000s
> > sys     0m0.017s
> > 
> > so it apparently works well for this case.
> 
> Thanks a lot for testing it! I will test it further. If it goes well I will
> send a formal patch with your Tested-by, unless you object.

I'm a bit late to reply, but I was fine with the above ;)

> > We could still have a storm of rtnl lock/unlock operations while
> > deleting a large tc tree with a lot of filters, and I think we can reduce
> > them with bulk free, eventually applying it to filters, too.
> > 
> > That will also reduce the pressure on the rtnl lock when e.g. OVS H/W
> > offload pushes a lot of rules/sec.
> > 
> > WDYT?
> > 
> 
> Why is this specific to tc filters? From what you are saying, we need to
> batch all TC operations (qdisc, filter and action) rather than just filters?

Exactly, the idea would be to batch all the deferred work items. I started
with blocks, to at least partly tackle the issue seen on qdisc removal.
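
Something roughly along these lines, just to illustrate the idea (a sketch
only, not a tested patch; the pending list, the work item and the
->pending_list member added to struct tcf_block are invented names for this
example):

/*
 * Rough sketch: coalesce the deferred tcf_block frees so that a burst of
 * block destructions takes the rtnl lock once per batch instead of once
 * per work item.
 */
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#include <linux/rtnetlink.h>
#include <net/sch_generic.h>

static LIST_HEAD(pending_blocks);	/* blocks waiting to be freed */
static DEFINE_SPINLOCK(pending_lock);	/* protects pending_blocks */

static void tcf_block_free_work(struct work_struct *work)
{
	struct tcf_block *block, *tmp;
	LIST_HEAD(batch);

	/* grab the whole backlog in one shot... */
	spin_lock(&pending_lock);
	list_splice_init(&pending_blocks, &batch);
	spin_unlock(&pending_lock);

	/* ...and tear it down under a single rtnl_lock/unlock pair */
	rtnl_lock();
	list_for_each_entry_safe(block, tmp, &batch, pending_list) {
		list_del(&block->pending_list);
		/* actual per-block teardown (chain flush, kfree, ...) here */
	}
	rtnl_unlock();
}

static DECLARE_WORK(tcf_block_free_ws, tcf_block_free_work);

/* called instead of scheduling one work item per block */
static void tcf_block_free_deferred(struct tcf_block *block)
{
	spin_lock(&pending_lock);
	list_add_tail(&block->pending_list, &pending_blocks);
	spin_unlock(&pending_lock);
	schedule_work(&tcf_block_free_ws);
}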

> In the short term, I think batching rtnl lock/unlock is a good optimization,
> so I have no objection. For the long term, I think we need to revise the RTNL
> lock and probably move it down to each layer, but clearly that requires
> much more work.

Agreed!

Paolo
