On 6/17/2020 4:38 PM, Pablo Neira Ayuso wrote:
> On Wed, Jun 17, 2020 at 11:36:19AM +0800, wenxu wrote:
>> On 6/17/2020 4:38 AM, Pablo Neira Ayuso wrote:
>>> On Tue, Jun 16, 2020 at 05:47:17PM +0200, Simon Horman wrote:
>>>> On Tue, Jun 16, 2020 at 11:18:16PM +0800, wenxu wrote:
>>>>> On 2020/6/16 22:34, Simon Horman wrote:
>>>>>> On Tue, Jun 16, 2020 at 10:20:46PM +0800, wenxu wrote:
>>>>>>> On 2020/6/16 18:51, Simon Horman wrote:
>>>>>>>> On Tue, Jun 16, 2020 at 11:19:38AM +0800, we...@ucloud.cn wrote:
>>>>>>>>> From: wenxu <we...@ucloud.cn>
>>>>>>>>>
>>>>>>>>> In the function __flow_block_indr_cleanup(), the match statement
>>>>>>>>> this->cb_priv == cb_priv is always false: flow_block_cb->cb_priv
>>>>>>>>> holds entirely different data from flow_indr_dev->cb_priv.
>>>>>>>>>
>>>>>>>>> Store the representor cb_priv in flow_block_cb->indr.cb_priv in
>>>>>>>>> the driver.
>>>>>>>>>
>>>>>>>>> Fixes: 1fac52da5942 ("net: flow_offload: consolidate indirect
>>>>>>>>> flow_block infrastructure")
>>>>>>>>> Signed-off-by: wenxu <we...@ucloud.cn>
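>>>>>>>>>
>>>>>>>>> For illustration only, a rough sketch of the driver-side change
>>>>>>>>> (the callback and variable names below are placeholders, not the
>>>>>>>>> final patch):
>>>>>>>>>
>>>>>>>>>         block_cb = flow_block_cb_alloc(driver_indr_setup_cb,
>>>>>>>>>                                        indr_priv, indr_priv,
>>>>>>>>>                                        driver_indr_block_release);
>>>>>>>>>         if (IS_ERR(block_cb))
>>>>>>>>>                 return PTR_ERR(block_cb);
>>>>>>>>>
>>>>>>>>>         /* Record the representor's cb_priv (the value registered
>>>>>>>>>          * via flow_indr_dev_register()) in the new indr.cb_priv
>>>>>>>>>          * field so that __flow_block_indr_cleanup() can match this
>>>>>>>>>          * block_cb on flow_indr_dev_unregister().
>>>>>>>>>          */
>>>>>>>>>         block_cb->indr.cb_priv = rep_cb_priv;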
>>>>>>>> Hi Wenxu,
>>>>>>>>
>>>>>>>> I wonder if this can be resolved by using the cb_ident field of struct
>>>>>>>> flow_block_cb.
>>>>>>>>
>>>>>>>> I observe that mlx5e_rep_indr_setup_block() seems to be the only 
>>>>>>>> call-site
>>>>>>>> where the value of the cb_ident parameter of flow_block_cb_alloc() is
>>>>>>>> per-block rather than per-device. So part of my proposal is to change
>>>>>>>> that.
>>>>>>> I checked all the xxdriver_indr_setup_block functions. It seems the
>>>>>>> cb_ident parameter of flow_block_cb_alloc() is per-block in all of
>>>>>>> them, both in nfp_flower_setup_indr_tc_block() and
>>>>>>> bnxt_tc_setup_indr_block().
>>>>>>>
>>>>>>> nfp_flower_setup_indr_tc_block:
>>>>>>>
>>>>>>> struct nfp_flower_indr_block_cb_priv *cb_priv;
>>>>>>>
>>>>>>> block_cb = flow_block_cb_alloc(nfp_flower_setup_indr_block_cb,
>>>>>>>                                cb_priv, cb_priv,
>>>>>>>                                nfp_flower_setup_indr_tc_release);
>>>>>>>
>>>>>>> bnxt_tc_setup_indr_block:
>>>>>>>
>>>>>>> struct bnxt_flower_indr_block_cb_priv *cb_priv;
>>>>>>>
>>>>>>> block_cb = flow_block_cb_alloc(bnxt_tc_setup_indr_block_cb,
>>>>>>>                                cb_priv, cb_priv,
>>>>>>>                                bnxt_tc_setup_indr_rel);
>>>>>>>
>>>>>>> And flow_block_cb_is_busy() is called in most places with cb_priv
>>>>>>> passed as the cb_ident argument, not a separate cb_ident.
>>>>>> Thanks, I see that now. But I still think it would be useful to 
>>>>>> understand
>>>>>> the purpose of cb_ident. It feels like it would lead to a clean solution
>>>>>> to the problem you have highlighted.
>>>>> I think cb_ident means "identity". It is used to identify each flow
>>>>> block cb.
>>>>>
>>>>> Both flow_block_cb_is_busy() and flow_block_cb_lookup() check
>>>>> block_cb->cb_ident == cb_ident.
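>>>>>
>>>>> From memory the lookup looks roughly like this (paraphrased from
>>>>> net/core/flow_offload.c, not copied verbatim):
>>>>>
>>>>> struct flow_block_cb *flow_block_cb_lookup(struct flow_block *block,
>>>>>                                            flow_setup_cb_t *cb,
>>>>>                                            void *cb_ident)
>>>>> {
>>>>>         struct flow_block_cb *block_cb;
>>>>>
>>>>>         list_for_each_entry(block_cb, &block->cb_list, list) {
>>>>>                 if (block_cb->cb == cb &&
>>>>>                     block_cb->cb_ident == cb_ident)
>>>>>                         return block_cb;
>>>>>         }
>>>>>
>>>>>         return NULL;
>>>>> }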
>>>> Thanks, I think that I now see what you mean about the different scope of
>>>> cb_ident and your proposal to allow cleanup by flow_indr_dev_unregister().
>>>>
>>>> I do, however, still wonder if there is a nicer way than reaching into
>>>> the structure and manually setting block_cb->indr.cb_priv
>>>> at each call-site.
>>>>
>>>> Perhaps a variant of flow_block_cb_alloc() for indirect blocks
>>>> would be nicer?
>>> A follow-up patch to add this new variant would be good. Probably
>>> __flow_block_indr_binding() can go away with this new variant to set
>>> up the indirect flow block.
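>>>
>>> As a rough idea of what such a variant could look like (only a sketch,
>>> names and arguments still to be decided; it assumes the new indr.cb_priv
>>> field from this patch):
>>>
>>> struct flow_block_cb *
>>> flow_indr_block_cb_alloc(flow_setup_cb_t *cb, void *cb_ident,
>>>                          void *cb_priv, void (*release)(void *cb_priv),
>>>                          struct flow_block_offload *bo,
>>>                          struct net_device *dev, void *data,
>>>                          void *indr_cb_priv,
>>>                          void (*cleanup)(struct flow_block_cb *block_cb))
>>> {
>>>         struct flow_block_cb *block_cb;
>>>
>>>         block_cb = flow_block_cb_alloc(cb, cb_ident, cb_priv, release);
>>>         if (IS_ERR(block_cb))
>>>                 return block_cb;
>>>
>>>         /* what __flow_block_indr_binding() sets up today */
>>>         block_cb->indr.binder_type = bo->binder_type;
>>>         block_cb->indr.data = data;
>>>         block_cb->indr.dev = dev;
>>>         block_cb->indr.cleanup = cleanup;
>>>         block_cb->indr.cb_priv = indr_cb_priv;
>>>         /* the list_add() to the global indirect block list would also
>>>          * move here
>>>          */
>>>
>>>         return block_cb;
>>> }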
>>
>> Maybe __flow_block_indr_binding() can't go away. The data and cleanup
>> callback that should init the flow_block_indr are only available in
>> flow_indr_dev_setup_offload(), so they can't be obtained in
>> flow_indr_block_cb_alloc().
> Probably flow_indr_block_bind_cb_t can be updated to include the data
> and the cleanup callback.
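>
> Roughly along these lines, i.e. today's prototype plus two new arguments
> (just a sketch):
>
> typedef int flow_indr_block_bind_cb_t(struct net_device *dev, void *cb_priv,
>                                       enum tc_setup_type type, void *type_data,
>                                       void *data,
>                                       void (*cleanup)(struct flow_block_cb *block_cb));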

Yes, this way the indr_block info can be set up in flow_indr_block_cb_alloc().

It also needs a flow_indr_block_cb_remove() to handle the UNBIND case.
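
Roughly something like this, as a sketch (name and exact placement still to
be decided):

static void flow_indr_block_cb_remove(struct flow_block_cb *block_cb,
                                      struct flow_block_offload *bo)
{
        /* drop the indirect binding set up on BIND ... */
        list_del(&block_cb->indr.list);
        /* ... and queue the block_cb on bo->cb_list so the caller can free
         * it, like flow_block_cb_remove() does for the direct case
         */
        list_move(&block_cb->list, &bo->cb_list);
}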
