> >>>> Many of the ASIC's internal resources are limited and are shared
> >>>> between several hardware processes. For example, unified hash-based
> >>>> memory can be used for many lookup purposes, like FDB and LPM. In
> >>>> many cases the user can provide a partitioning scheme for such a
> >>>> resource in order to fine-tune it for their application. In such
> >>>> cases a driver reload is needed for the changes to take effect, so
> >>>> this patchset also adds support for hot reload.
> >>>>
> >>>> Such an abstraction can be coupled with devlink's dpipe interface,
> >>>> which models the ASIC's pipeline as a graph of match/action tables.
> >>>> By modeling the hardware resource object, and by coupling it to
> >>>> several dpipe tables, further visibility can be achieved in order to
> >>>> debug ASIC-wide issues.
> >>>>
> >>>> The proposed interface will provide the user the ability to
> >>>> understand the limitations of the hardware, and to receive
> >>>> notifications regarding its occupancy. Furthermore, monitoring
> >>>> resource occupancy can be done in real time, which is useful in many
> >>>> cases.
> >>>
> >>> In the last RFC (not v1, but RFC) I asked for some kind of description
> >>> for each resource, and you and Arkadi have pushed back. Let's walk
> >>> through an example to see what I mean:
> >>>
> >>> $ devlink resource show pci/0000:03:00.0
> >>> pci/0000:03:00.0:
> >>>  name kvd size 245760 size_valid true
> >>>  resources:
> >>>    name linear size 98304 occ 0
> >>>    name hash_double size 60416
> >>>    name hash_single size 87040
> >>>
> >>> So this 2700 has 3 resources that can be managed -- some table or
> >>> resource or something named 'kvd' with linear, hash_double and
> >>> hash_single sub-resources. What are these names referring to? The
> >>> above output gives no description, and 'kvd' is not an industry term.
> >>> Further,
> >>
> >> These are internal resources specific to the ASIC. Would you like a
> >> description for each, or something like that?
> >
> > devlink has some nice self-documenting capabilities. What's missing here
> > is a description of what the resource is used for in standard terms --
> > ipv4 host routes, fdb, nexthops, rifs, etc. Even if the description is a
> > short list versus an exhaustive list of everything it is used for. e.g.,
> > Why would a user decrease linear and increase hash_single or vice versa?
> 
> 
> Arkadi, on what David says above, can the resource names and ids not
> be driver-specific, but moved up to the switchdev layer and just map
> to fdb, host routes, nexthop table sizes, etc.? Can these generic
> networking resources then in turn be mapped to kvd sizes by the
> driver?

I think it goes the other way around. The dpipe tables are the ones that
can be translated to functionality; the resources are internal and
HW-specific, representing the possible internal division of resources -
but a given resource isn't necessarily mapped to a single networking
feature. [It might be in some cases, but not in the general case]

You could always move to a structured approach where each resource in
the hierarchy is further split into sub-resources until each leaf
represents a single networking concept - but that would stop being an
abstraction of the HW resources and become a SW implementation instead,
as SW would have to be the one to maintain and enforce the resource
distribution. And that's not what we're trying to achieve here.
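
For concreteness, a repartition-plus-reload flow on the devlink CLI would
look roughly like the sketch below. The sizes are illustrative only (they
just keep the sub-resources summing to the parent's 245760), and the
resource paths assume the kvd layout from the `resource show` output
quoted above:

```shell
# Shrink 'linear' and grow 'hash_single' within the parent 'kvd' pool.
# Sizes are illustrative; they must respect the device's granularity
# constraints and sum to the parent's size (65536 + 119808 + 60416 = 245760).
devlink resource set pci/0000:03:00.0 path /kvd/linear size 65536
devlink resource set pci/0000:03:00.0 path /kvd/hash_single size 119808

# The new partitioning only takes effect after a (hot) reload of the driver.
devlink dev reload pci/0000:03:00.0

# Verify the new layout and inspect occupancy.
devlink resource show pci/0000:03:00.0
```

This is a hardware-configuration sequence, so it only runs against a
device whose driver exposes these resources.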
