On Thu, Oct 22, 2020 at 06:01:27PM +0800, Zhuoliang Zhang wrote:
> From: zhuoliang zhang <zhuoliang.zh...@mediatek.com>
>
> We found that the following race condition exists in the
> xfrm_alloc_userspi flow:
>
> user thread                                state_hash_work thread
> ----                                       ----
> xfrm_alloc_userspi()
> __find_acq_core()
> /*alloc new xfrm_state:x*/
> xfrm_state_alloc()
> /*schedule state_hash_work thread*/
> xfrm_hash_grow_check()                     xfrm_hash_resize()
> xfrm_alloc_spi                             /*hold lock*/
> x->id.spi = htonl(spi)                     spin_lock_bh(&net->xfrm.xfrm_state_lock)
> /*waiting lock release*/                   xfrm_hash_transfer()
> spin_lock_bh(&net->xfrm.xfrm_state_lock)   /*add x into hlist:net->xfrm.state_byspi*/
>                                            hlist_add_head_rcu(&x->byspi)
>                                            spin_unlock_bh(&net->xfrm.xfrm_state_lock)
>
> /*add x into hlist:net->xfrm.state_byspi 2 times*/
> hlist_add_head_rcu(&x->byspi)
>
> 1. A new state x is allocated in xfrm_state_alloc() and added into the
>    bydst hlist in __find_acq_core() on the LHS;
> 2. On the RHS, the state_hash_work thread traverses the old bydst hlist
>    and transfers every xfrm_state (including x) into the new bydst hlist
>    and the new byspi hlist;
> 3. The user thread on the LHS then gets the lock and adds x into the new
>    byspi hlist again.
>
> So the same xfrm_state (x) is added into the same hash list
> (net->xfrm.state_byspi) twice, which turns the hash chain into an
> infinite loop.
>
> To fix the race, the assignment x->id.spi = htonl(spi) in
> xfrm_alloc_spi() is moved after the spin_lock_bh(), so that the
> state_hash_work thread no longer adds an x whose id.spi is still zero
> into the hash list.
>
> Fixes: f034b5d4efdf ("[XFRM]: Dynamic xfrm_state hash table sizing.")
> Signed-off-by: zhuoliang zhang <zhuoliang.zh...@mediatek.com>
Applied, thanks a lot! One remark: please don't use base64 encoding when you send patches. I had to hand-edit your patch to get it applied.