On 2020-10-13 14:44:19 [+0200], Eelco Chaudron wrote:
> The flow_lookup() function uses per-CPU variables, which must not be
> preempted. However, this is fine in the general NAPI use case where
> the local BH is disabled. But it's also called in the netlink
> context, which is preemptible. The below patch makes sure that even
> in the netlink path, preemption is disabled.
>
> Fixes: eac87c413bf9 ("net: openvswitch: reorder masks array based on usage")
> Reported-by: Juri Lelli <jle...@redhat.com>
> Signed-off-by: Eelco Chaudron <echau...@redhat.com>
> ---
>  net/openvswitch/flow_table.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/net/openvswitch/flow_table.c b/net/openvswitch/flow_table.c
> index 87c286ad660e..16289386632b 100644
> --- a/net/openvswitch/flow_table.c
> +++ b/net/openvswitch/flow_table.c
> @@ -850,9 +850,17 @@ struct sw_flow *ovs_flow_tbl_lookup(struct flow_table *tbl,
>  	struct mask_array *ma = rcu_dereference_ovsl(tbl->mask_array);
>  	u32 __always_unused n_mask_hit;
>  	u32 __always_unused n_cache_hit;
> +	struct sw_flow *flow;
>  	u32 index = 0;
>
> -	return flow_lookup(tbl, ti, ma, key, &n_mask_hit, &n_cache_hit, &index);
> +	/* This function gets called through the netlink interface and therefore
> +	 * is preemptible. However, flow_lookup() needs to be called with
> +	 * preemption disabled due to CPU-specific variables.
> +	 */

Once again: u64_stats_update_begin(). What protects you against concurrent
access?

> +	preempt_disable();
> +	flow = flow_lookup(tbl, ti, ma, key, &n_mask_hit, &n_cache_hit, &index);
> +	preempt_enable();
> +	return flow;
>  }
>
>  struct sw_flow *ovs_flow_tbl_lookup_exact(struct flow_table *tbl,

Sebastian