O(n) is the least that we can have!!

Thanks,

Saurabh Gadia
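P.S. For illustration, a rough sketch of that loop (placeholder names and fields, not the model's actual code; see updateRecPriority() in the linked Mutex.java for the real implementation). Lower number means higher priority, and the cost is linear in the number of mutexes the holder already holds:

// Rough sketch only: placeholder names, lower number = higher priority.
class ModelThread {
    int priority;
    final java.util.Deque<ModelMutex> heldMutexes = new java.util.ArrayDeque<>();
}

class ModelMutex {
    ModelThread holder;
    int ceiling = Integer.MAX_VALUE; // best (numerically lowest) waiter priority seen

    // Called by a thread that blocks on this mutex: promote the holder and
    // walk its stack of held mutexes once -- a loop, not recursion, and no
    // search for the highest-priority waiter.
    void promoteHolder(int waiterPriority) {
        ModelThread t = holder;
        if (t == null || waiterPriority >= t.priority) {
            return;                         // nothing to inherit
        }
        t.priority = waiterPriority;        // priority inheritance
        for (ModelMutex m : t.heldMutexes) {
            if (waiterPriority < m.ceiling) {
                m.ceiling = waiterPriority; // recorded so priority can be reset correctly on release
            }
        }
    }
}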
On Thu, Jun 25, 2015 at 5:39 PM, Saurabh Gadia <ga...@usc.edu> wrote:
> I mean not recursion, just looping till the top of the stack of held mutexes.
> Check this out:
> https://github.com/saurabhgadia4/lock-model/blob/master/rtems/Mutex.java
> --> updateRecPriority(), line 143
>
> Thanks,
>
> Saurabh Gadia
>
> On Thu, Jun 25, 2015 at 5:35 PM, Joel Sherrill <joel.sherr...@oarcorp.com>
> wrote:
>
>> On June 25, 2015 7:33:24 PM CDT, Saurabh Gadia <ga...@usc.edu> wrote:
>> >@joel: It essentially means finding out the highest priority thread (or
>> >ceiling) of any remaining mutex held, right?
>> >
>> >If you are talking about this in the context of the nested mutex problem --->
>> >
>> >We never go and find the highest priority thread waiting on a held mutex.
>> >Instead, the highest priority thread will recursively make changes to the
>> >top of the stack of mutexes held by the holder. We don't search.
>>
>> True recursion? Or implemented with a loop?
>>
>> OK. Still an O(n) operation. Discuss locking on devel@
>>
>> >Thanks,
>> >
>> >Saurabh Gadia
>> >
>> >On Thu, Jun 25, 2015 at 5:27 PM, Joel Sherrill
>> ><joel.sherr...@oarcorp.com> wrote:
>> >
>> >On June 25, 2015 7:23:06 PM CDT, Cyrille Artho
>> ><cyrille.ar...@gmail.com> wrote:
>> >>Hi Gedare and all,
>> >>Good news:
>> >>In this morning's discussion, we fixed some remaining issues in the
>> >>model and now have results.
>> >>
>> >>The current model shows a case of priority inversion with a simple
>> >>priority reset mechanism, and the absence of that problem with the
>> >>proposed strategy.
>> >>
>> >>There is only one flaw: we had to use a global lock in the model
>> >>(essentially making the entire "lock" and "unlock" operations atomic),
>> >>because so far we couldn't find a way to safely lock the operations at
>> >>a lower level. The model uses a list of locks held (by a thread) and a
>> >>list of threads waiting on a lock (in the lock data structure), so
>> >>these aspects are essentially cross-cutting and very hard to get right
>> >>without a global lock.
>> >>
>> >>The RTEMS code uses different macros inside these operations (_Seize,
>> >>_Surrender), which do not seem to use a global lock (but I may have
>> >>missed something). It is possible that a data race occurs for a
>> >>complex lock update strategy if we don't use a global lock.
>> >>
>> >>Saurabh will clean up the model and provide documentation with
>> >>details.
>> >>
>> >>In the meantime, can you maybe answer these questions for us:
>> >>
>> >>(1) Did you have any issues with race conditions in the current RTEMS
>> >>code (especially recent changes)?
>> >
>> >We haven't observed any, but that doesn't mean there aren't any
>> >undiscovered ones.
>> >
>> >>(2) Is a fix using a global lock acceptable? Correctness-wise it's the
>> >>safest thing, but of course performance-wise it's a bottleneck.
>> >
>> >The locking strategy needs to be discussed. On a uniprocessor system,
>> >these issues tend to be minor and cause delays. On SMP systems, they
>> >become more complex and cause performance problems.
>> >
>> >It essentially means finding out the highest priority thread (or
>> >ceiling) of any remaining mutex held, right?
>> >
>> >Bring this up on devel@
>> >
>> >--joel
>>
>> --joel
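For reference, the global lock mentioned in question (2) above amounts to something like the following sketch: one model-wide lock wraps the whole obtain/release bookkeeping, so the cross-cutting updates (a thread's list of held mutexes and a mutex's list of waiters) can never be observed half-done. Names are illustrative only, reusing the placeholder types from the sketch earlier in this message, and the actual blocking/wakeup of waiters is elided; this is not the model's actual API.

import java.util.concurrent.locks.ReentrantLock;

// Illustrative only: one model-wide lock makes the obtain/release
// bookkeeping atomic. Blocking and waking of waiters is elided.
final class GlobalLockSketch {
    private static final ReentrantLock GLOBAL = new ReentrantLock();

    static void obtain(ModelThread self, ModelMutex m) {
        GLOBAL.lock();
        try {
            // ...enqueue self on m's wait list, run the priority-inheritance
            // updates (e.g. the promoteHolder() loop above), record m in
            // self's held-mutex stack...
        } finally {
            GLOBAL.unlock();
        }
    }

    static void release(ModelThread self, ModelMutex m) {
        GLOBAL.lock();
        try {
            // ...pop m from self's held-mutex stack, reset self's priority
            // from the remaining mutexes' ceilings, hand m to the next waiter...
        } finally {
            GLOBAL.unlock();
        }
    }
}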