Wed, Nov 02, 2016 at 03:35:03PM CET, ro...@cumulusnetworks.com wrote:
>On 11/2/16, 6:48 AM, Jiri Pirko wrote:
>> Wed, Nov 02, 2016 at 02:29:40PM CET, ro...@cumulusnetworks.com wrote:
>>> On Wed, Nov 2, 2016 at 12:20 AM, Jiri Pirko <j...@resnulli.us> wrote:
>>>> Wed, Nov 02, 2016 at 03:13:42AM CET, ro...@cumulusnetworks.com wrote:
>>> [snip]
>>>
>>>>> I understand.. but if you are adding some core infrastructure for
>>>>> switchdev, it cannot be based on the number of simple use cases or
>>>>> data you have today.
>>>>>
>>>>> I won't be surprised if tomorrow other switch drivers have a case
>>>>> where they need to reset the hw routing table state and reprogram
>>>>> all routes again. Re-registering the notifier just to get the
>>>>> routing state of the kernel will not scale. For the long term,
>>>>> since the driver does not maintain a cache,
>>>> Drivers (mlxsw, rocker) maintain a cache. So I'm not sure why you say
>>>> otherwise.
>>>>
>>>>
>>>>> a pull api with efficient use of rtnl will be useful for other such
>>>>> cases as well.
>>>> How do you imagine this "pull API" should look?
>>>
>>> Just like you already have added fib notifiers to parallel fib netlink
>>> notifications, the pull API is a parallel to 'netlink dump'.
>>> Is my imagination too wild? :)
>> Perhaps I'm slow, but I don't understand what you mean.
>
>>>>>
>>>>> If you don't want to get into the complexity of a new api right away
>>>>> because of the simple case of management interface routes you have,
>>>>> can your driver register the notifier early?
>>>>> (I am sure you have probably already thought about this.)
>>>> Register early? What would it resolve? I must be missing something. We
>>>> register as early as possible. But the thing is, we cannot register
>>>> in the past. And that is what this patch resolves.
>>> Sure, you must be having a valid problem then. I was just curious why
>>> your driver is not up and initialized before any of the addresses or
>>> routes get configured in the system (even on a management port). Ours
>> If you unload the module and load it again, for example. This is a valid
>> use case.
>
>I see, so you are optimizing for this use case. Sure, it is a valid use
>case, but a narrow one
It is not an optimization, it's a bug fix.

>compared to the rtnl overhead the api may bring
>(note that I am not saying you should not solve it).
>
>>
>>
>>> does. But I agree there can be races and you cannot always guarantee
>>> that (I was just responding to Ido's comment about adding complexity
>>> for a small problem he has to solve for management routes). Our driver
>>> does a pull before it starts. This helps when we want to reset the
>>> hardware routing table state too.
>> Can you point me to your driver in the tree? I would like to see how you
>> do "the pull".
>:), you know all this... but, if I must explicitly say it, yes, we don't
>have a driver in the tree and we don't own the hardware. My analogy here
>is of a netlink dump that we use heavily for the same scale that you will
>probably deploy.

You are comparing a netlink kernel-user api with an in-kernel api. I don't
think those are comparable at all. Therefore I asked how you imagine "the
pull" should look in the kernel. Stating that it should look like some part
of the user api does not help me much :(

>I do give you full credit for the hardware and the driver and switchdev
>support and all that!
>
>>
>>>
>>> But my point was, when you are defining an API, you cannot quantify
>>> the 'past' to be just the very 'close past' or 'the past is just the
>>> management routes that were added'. Tomorrow the 'past' can be the
>>> full routing table if you need to reset the hardware state.
>> Sure.
>
>This pull api was a suggestion for an efficient use of rtnl, similar to how
>the netlink routing dump handles it. If you cannot imagine an api like
>that... sure, your call.

No, that's why I'm asking, because I was under the impression you could
imagine that :)
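
For context, the disagreement above is about the shape of the in-kernel API.
The patch under discussion makes FIB notifier registration replay the existing
routing state to the new listener (the "we cannot register in the past"
problem), while Roopa argues for an explicit in-kernel "pull"/dump call that a
driver could invoke at any time, e.g. to rebuild its hardware table after a
reset. Below is a rough sketch of the consumer side of the notifier approach.
It assumes the FIB_EVENT_ENTRY_ADD/DEL events and struct
fib_entry_notifier_info from the FIB notifier infrastructure this thread
builds on, and a register_fib_notifier() signature taking a flush callback as
the replay patch eventually did; the in-flight version discussed here may
differ, and the my_hw_*() helpers are made-up placeholders, not any real
driver's code.

/* Sketch only: a switchdev-style driver consuming FIB notifications.
 * Hardware helpers are hypothetical; event names and struct fields follow
 * the FIB notifier infrastructure referenced in this thread.
 */
#include <linux/kernel.h>
#include <linux/notifier.h>
#include <net/ip_fib.h>

static void my_hw_route_insert(u32 tb_id, u32 dst, int dst_len,
			       struct fib_info *fi)
{
	/* Placeholder: program the route into the hardware FIB. */
	pr_info("hw insert: table %u, prefix len %d\n", tb_id, dst_len);
}

static void my_hw_route_remove(u32 tb_id, u32 dst, int dst_len,
			       struct fib_info *fi)
{
	/* Placeholder: remove the route from the hardware FIB. */
	pr_info("hw remove: table %u, prefix len %d\n", tb_id, dst_len);
}

static void my_hw_routes_flush(struct notifier_block *nb)
{
	/* Placeholder: drop all offloaded routes; invoked if the replay
	 * performed during registration turns out to be inconsistent.
	 */
	pr_info("hw flush\n");
}

static int my_fib_event(struct notifier_block *nb, unsigned long event,
			void *ptr)
{
	struct fib_entry_notifier_info *fen_info = ptr;

	switch (event) {
	case FIB_EVENT_ENTRY_ADD:
		my_hw_route_insert(fen_info->tb_id, fen_info->dst,
				   fen_info->dst_len, fen_info->fi);
		break;
	case FIB_EVENT_ENTRY_DEL:
		my_hw_route_remove(fen_info->tb_id, fen_info->dst,
				   fen_info->dst_len, fen_info->fi);
		break;
	}
	return NOTIFY_DONE;
}

static struct notifier_block my_fib_nb = {
	.notifier_call = my_fib_event,
};

static int my_router_init(void)
{
	/* With the replay behaviour discussed in this thread, registering
	 * also delivers FIB_EVENT_ENTRY_ADD for routes that already exist,
	 * so a driver loaded (or reloaded) after routes were configured
	 * still learns about them.
	 */
	return register_fib_notifier(&my_fib_nb, my_hw_routes_flush);
}

The "pull" alternative Roopa describes would instead expose the walk of the
kernel FIB as a separate call a driver could repeat later (for example after
a hardware reset) without unregistering and re-registering its notifier.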