Hi Andre,

"Guedes, Andre" <andre.gue...@intel.com> writes:

> Hi Vinicius,
>
>> On Sep 23, 2019, at 10:04 PM, Vinicius Costa Gomes 
>> <vinicius.go...@intel.com> wrote:
>> 
>> The problem happens because, when offloading is enabled, the cbs
>> instance is not added to the list.
>> 
>> Also, the current code doesn't correctly handle the case where
>> offload is disabled without removing the qdisc: if the link speed
>> changes, the credit calculations will be wrong. When we create the
>> cbs instance with offloading enabled, it's not added to the
>> notification list, so when we later disable offloading it's still
>> not in the list, and link speed changes will not affect it.
>> 
>> The solution for both issues is the same: unconditionally add the
>> cbs instance being created to the global list, even if the link
>> state notification isn't useful "right now".
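
For context, the shape of the fix is roughly the following -- a
simplified sketch of cbs_init(), not the literal diff, assuming the
global list is the existing cbs_list/cbs_list_lock pair used by the
link state notifier:

static int cbs_init(struct Qdisc *sch, struct nlattr *opt,
		    struct netlink_ext_ack *extack)
{
	struct cbs_sched_data *q = qdisc_priv(sch);
	struct net_device *dev = qdisc_dev(sch);

	if (!opt) {
		NL_SET_ERR_MSG(extack, "Missing CBS qdisc options which are mandatory");
		return -EINVAL;
	}

	q->queue = sch->dev_queue - netdev_get_tx_queue(dev, 0);

	q->enqueue = cbs_enqueue_soft;
	q->dequeue = cbs_dequeue_soft;

	qdisc_watchdog_init(&q->watchdog, sch);

	/* Add unconditionally, even when offload ends up enabled, so
	 * that a later switch back to software mode still gets link
	 * speed notifications.
	 */
	spin_lock(&cbs_list_lock);
	list_add(&q->cbs_list, &cbs_list);
	spin_unlock(&cbs_list_lock);

	return cbs_change(sch, opt, extack);
}
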
>
> I believe we could fix both issues described above and still not
> notify the qdisc about link state changes if we handled the list
> insertion/removal in cbs_change() instead.
>
> Reading the cbs code more carefully, it seems it would be beneficial
> to refactor the offload handling. For example, we currently init the
> qdisc_watchdog even though it’s not useful when offload is enabled.
> Now we’re also going to notify the qdisc even when that isn’t useful.

I like your idea, but even after reading your email and the code a
couple of times, I couldn't quickly come up with anything that wouldn't
complicate things (i.e. add more code); I would need to experiment a
bit. (By the way, qdisc_watchdog_init() just initializes some fields in
a struct, and the notifications should be quite rare in practice.)
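
Just to make sure we are talking about the same thing, I imagine it
would be something along these lines, called at the end of cbs_change()
once q->offload is settled (untested sketch, the helper name is made
up; it assumes cbs_init() does INIT_LIST_HEAD(&q->cbs_list) and
cbs_destroy() uses list_del_init(), so list_empty() can double as "not
on the list"):

static void cbs_update_list_membership(struct cbs_sched_data *q)
{
	spin_lock(&cbs_list_lock);
	if (!q->offload && list_empty(&q->cbs_list)) {
		/* software mode: start receiving link state notifications */
		list_add(&q->cbs_list, &cbs_list);
	} else if (q->offload && !list_empty(&q->cbs_list)) {
		/* offload mode: stop receiving them */
		list_del_init(&q->cbs_list);
	}
	spin_unlock(&cbs_list_lock);
}

It isn't much code by itself, but making init(), change() and destroy()
agree on who owns the list membership is exactly the kind of thing I'd
want to test carefully before going that way.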

So my suggestion is to keep this patch as is, as it solves a real crash
that a colleague faced. Later, we can try and simplify things even more.

Cheers,
--
Vinicius

P.S.: I think I am still a bit traumatized, because getting the init()
and destroy() paths right was the hardest part when we were trying to
upstream this. That's why I am hesitant about adding more code to those
flows.
