On 29/7/21 9:42 am, Gedare Bloom wrote:
> On Wed, Jul 28, 2021 at 5:05 PM Chris Johns <chr...@rtems.org> wrote:
>> On 29/7/21 12:03 am, Gedare Bloom wrote:
>>> On Tue, Jul 27, 2021 at 2:59 AM <chr...@rtems.org> wrote:
>>>> From: Chris Johns <chr...@rtems.org>
>>>> +static int
>>>> +lockmgr_upgrade(struct lock *lk, u_int flags, struct lock_object *ilk,
>>>> +    const char *file, int line)
>>>> +{
>>>> +        uintptr_t x, v;
>>>> +        int error = 0;
>>>> +        LOCK_LOG_LOCK("XUPGRADE", &lk->lock_object, 0, 0, file, line);
>>>> +        lockmgr_trace("xupgrade", 'I', lk);
>>>> +        v = lk->lk_lock;
>>>> +        x = v & ~LK_SHARE;
>>>> +        atomic_store_rel_ptr(&lk->lk_lock, x);
>>>
>>> I think this function is to upgrade a shared (waiters/spinner) lock to
>>> non-shared (spinner) lock? I'm not confident about correctness here.
>>> It looks to me like there can be a race condition if multiple waiters
>>> attempt to upgrade in parallel. It works by a preemption after the
>>> first waiter loads `v = lk->lk_lock` and then another waiter loads
>>> `v`. Actually, there doesn't seem to be anything that prevents two
>>> separate calls to upgrade the lock by different waiters in sequence
>>> either. I could be wrong (of course).
>>
>> We do not support shared locks and we do not share specific paths through
>> locks so while the code may request a shared lock it is not shared.
>>
> ok that is helpful.
>
>> You could be correct and I welcome you reviewing the code in the FB kernel
>> this is based on.
>>
> I took a quick look.
Thanks.

> the FB kernel uses spinning and
> atomic_fcmpset_ptr instead, which I think exactly addresses this
> problem.

Is this something we should do, or is it something we should not be worried
about?

Chris
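For reference, below is a minimal sketch of the spinning compare-and-swap upgrade Gedare describes, written with standard C11 atomics rather than the FreeBSD atomic(9) primitives. The lock-word layout, the LK_SHARE bit value, and the upgrade_to_exclusive() helper are simplified assumptions for illustration, not the real lockmgr state encoding; the point is only to show how a retry loop closes the window that a plain load followed by atomic_store_rel_ptr() leaves open.

/*
 * Sketch only: a simplified lock word where an LK_SHARE bit marks a
 * shared hold. Not the lockmgr encoding.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define LK_SHARE ((uintptr_t)0x01)   /* hypothetical share bit */

static bool
upgrade_to_exclusive(_Atomic uintptr_t *lk_lock)
{
        uintptr_t v = atomic_load_explicit(lk_lock, memory_order_relaxed);

        for (;;) {
                if ((v & LK_SHARE) == 0) {
                        /* Another thread already made it exclusive; fail. */
                        return false;
                }
                /*
                 * Clear LK_SHARE only if the lock word still holds the
                 * value we observed. On failure v is refreshed with the
                 * current word and we retry, instead of blindly
                 * overwriting another thread's update.
                 */
                if (atomic_compare_exchange_weak_explicit(lk_lock, &v,
                    v & ~LK_SHARE, memory_order_acq_rel,
                    memory_order_relaxed))
                        return true;
        }
}

The FreeBSD code does the equivalent with atomic_fcmpset_ptr() in a loop, which likewise refreshes the expected value on failure, so two racing upgraders cannot both believe they succeeded.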