On Wed, Nov 12, 2025 at 7:08 AM Saket Kumar Bhaskar <[email protected]> wrote:
>
> On Tue, Nov 11, 2025 at 10:35:39AM -0800, Alexei Starovoitov wrote:
> > On Tue, Nov 11, 2025 at 6:33 AM Saket Kumar Bhaskar <[email protected]> wrote:
> > >
> > > On Thu, Nov 06, 2025 at 09:15:39AM -0800, Alexei Starovoitov wrote:
> > > > On Wed, Nov 5, 2025 at 9:26 PM Saket Kumar Bhaskar
> > > > <[email protected]> wrote:
> > > > >
> > > > > Since commit 31158ad02ddb ("rqspinlock: Add deadlock detection and recovery"),
> > > > > the update path on re-entrancy now reports deadlock via
> > > > > -EDEADLK instead of the previous -EBUSY.
> > > > >
> > > > > The selftest is updated to align the expected errno
> > > > > with the kernel's current behavior.
> > > > >
> > > > > Signed-off-by: Saket Kumar Bhaskar <[email protected]>
> > > > > ---
> > > > >  tools/testing/selftests/bpf/prog_tests/htab_update.c | 2 +-
> > > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > > >
> > > > > diff --git a/tools/testing/selftests/bpf/prog_tests/htab_update.c b/tools/testing/selftests/bpf/prog_tests/htab_update.c
> > > > > index 2bc85f4814f4..98d52bb1446f 100644
> > > > > --- a/tools/testing/selftests/bpf/prog_tests/htab_update.c
> > > > > +++ b/tools/testing/selftests/bpf/prog_tests/htab_update.c
> > > > > @@ -40,7 +40,7 @@ static void test_reenter_update(void)
> > > > >         if (!ASSERT_OK(err, "add element"))
> > > > >                 goto out;
> > > > >
> > > > > -       ASSERT_EQ(skel->bss->update_err, -EBUSY, "no reentrancy");
> > > > > +       ASSERT_EQ(skel->bss->update_err, -EDEADLK, "no reentrancy");
> > > >
> > > > Makes sense, but it looks like the test has been broken for quite some time.
> > > > It fails with
> > > >
> > > >         /* lookup_elem_raw() may be inlined and find_kernel_btf_id()
> > > >          * will return -ESRCH
> > > >          */
> > > >         bpf_program__set_autoload(skel->progs.lookup_elem_raw, true);
> > > >         err = htab_update__load(skel);
> > > >         if (!ASSERT_TRUE(!err || err == -ESRCH, "htab_update__load") || err)
> > > >
> > > > before reaching the deadlock check.
> > > > Please make it more robust.
> > > > __pcpu_freelist_pop() might be a better alternative than
> > > > lookup_elem_raw().
> > > >
> > > > pw-bot: cr
> > >
> > > Hi Alexei,
> > >
> > > I tried __pcpu_freelist_pop(), but it looks like it is not a good
> > > candidate to attach fentry to, as it is not traceable:
> > >
> > >         trace_kprobe: Could not probe notrace function __pcpu_freelist_pop
> > >
> > > I wasn't able to find any other suitable function.
> >
> > alloc_htab_elem() is not inlined for me.
> > bpf_obj_free_fields() would be another option.
>
> Since alloc_htab_elem() is a static function, wouldn't its
> inlining behavior be compiler-dependent?
Of course, just like lookup_elem_raw(), but alloc_htab_elem() is much bigger
and less likely to be inlined.

> static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
>                                          void *value, u32 key_size, u32 hash,
>                                          bool percpu, bool onallcpus,
>                                          struct htab_elem *old_elem)
>
> When the fentry program is instead attached to bpf_obj_free_fields(),
> the bpf_map_update_elem() call returns 0 rather than -EDEADLK,
> because bpf_obj_free_fields() is not invoked in the bpf_map_update_elem()
> re-entrancy path.

Then make it so. Think about what you need to do to make
check_and_free_fields() call it.
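[Editor's note: a rough, untested sketch of the direction Alexei hints at.
check_and_free_fields() only calls bpf_obj_free_fields() when the map value
has BTF-managed fields, so giving the selftest map a value containing, e.g.,
a struct bpf_timer should pull bpf_obj_free_fields() into the update path.
The program shape mirrors the existing progs/htab_update.c; the timer field
and the on_free_fields name are assumptions, not part of the posted patch.]

        /* progs/htab_update.c (sketch) */
        #include "vmlinux.h"
        #include <bpf/bpf_helpers.h>
        #include <bpf/bpf_tracing.h>

        char _license[] SEC("license") = "GPL";

        /* A BTF-managed field (bpf_timer) in the value gives
         * check_and_free_fields() something to free, so it should
         * call bpf_obj_free_fields() when an element is replaced.
         */
        struct map_value {
                struct bpf_timer timer;
        };

        struct {
                __uint(type, BPF_MAP_TYPE_HASH);
                __uint(max_entries, 1);
                __type(key, __u32);
                __type(value, struct map_value);
        } htab SEC(".maps");

        int pid = 0;
        int update_err = 0;

        SEC("fentry/bpf_obj_free_fields")
        int BPF_PROG(on_free_fields)
        {
                __u32 key = 0;
                struct map_value value = {};

                if ((bpf_get_current_pid_tgid() >> 32) != pid)
                        return 0;

                /* Re-enter the htab update path; the rqspinlock
                 * re-entrancy check should report -EDEADLK.
                 */
                update_err = bpf_map_update_elem(&htab, &key, &value, BPF_ANY);
                return 0;
        }

The userspace side would then need to update an existing key (overwrite) so
that the old element's fields are actually freed and the fentry fires.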

