PR2183 and affinity migration
Hi,

I think we are seeing this bug based on some detection code the Daniels added. They can explain the detection code and send you a patch, but it basically verifies that the value of is_executing is as expected when you begin to restore the heir. If not, it traps.

We don't see this under normal block/unblock operations, but there is a "check for migrations" pass at the end of those operations and of affinity changes, which checks whether the highest priority ready thread could displace a scheduled thread. When this pass runs, we end up wanting to move an executing thread from one core to another. This very reliably trips their check.

I wondered whether we had optimized the migration code to the point of breaking it, so I locally replaced it with set state/clear state, which we know works: we identify a scheduled thread which could be replaced by the highest priority ready thread, taking affinity into consideration. If one exists, we set the migrating state on that scheduled thread and then clear that state. This should (and does) result in the desired highest priority thread replacing it in the scheduled set. But the check is tripped when that scheduled/executing thread is the outgoing thread on one core and the incoming thread on another.

I see this as two state changes within a single scheduler operation, but that sounds the same as your double migration. We are likely replacing the heir twice and tripping the same condition.

If you want a test case for this, we can get our code cleaned up and push it. Gaisler will have to send their test code.

--
Joel Sherrill, Ph.D.             Director of Research & Development
joel.sherr...@oarcorp.com        On-Line Applications Research
Ask me about RTEMS: a free RTOS  Huntsville AL 35805
Support Available                (256) 722-9985
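To make the two mechanisms in the message above concrete, here is a minimal C sketch of (a) the is_executing detection check and (b) the set state/clear state migration fallback. Every identifier in it (the Thread_Control fields, Per_CPU_Control, STATES_MIGRATING, and the function names) is an illustrative stand-in assumed for this sketch, not the actual Gaisler detection patch or the RTEMS scheduler code, and the stub state helpers only flip bits so the example compiles standalone.

#include <stdbool.h>
#include <stdlib.h>

/* Simplified stand-ins for the kernel structures involved
 * (hypothetical, not the real RTEMS definitions). */
typedef struct Thread_Control {
  int           priority;      /* lower value means higher priority */
  unsigned int  current_state; /* thread state bit set */
  volatile bool is_executing;  /* true while the thread runs on some core */
} Thread_Control;

typedef struct Per_CPU_Control {
  Thread_Control *executing; /* thread currently executing on this core */
  Thread_Control *heir;      /* thread that should execute next */
} Per_CPU_Control;

#define STATES_MIGRATING 0x2000U /* illustrative bit value */

/* Detection check: run when a core begins to restore its heir.  If the
 * heir is still marked as executing and it is not the thread already
 * running on this core, the same thread is outgoing on one core and
 * incoming on another within a single scheduler operation, so trap. */
static void _Thread_Debug_check_heir( const Per_CPU_Control *cpu )
{
  const Thread_Control *heir = cpu->heir;

  if ( heir != cpu->executing && heir->is_executing ) {
    abort(); /* stands in for the trap in the real detection patch */
  }
}

/* Stub state changes: the real kernel routines also do the scheduler
 * bookkeeping (removing the thread from and returning it to the ready
 * and scheduled sets); here they only flip bits so this compiles. */
static void _Thread_Set_state( Thread_Control *thread, unsigned int state )
{
  thread->current_state |= state;
}

static void _Thread_Clear_state( Thread_Control *thread, unsigned int state )
{
  thread->current_state &= ~state;
}

/* Migration fallback: if the highest priority ready thread should
 * displace a scheduled thread (chosen with affinity considered), block
 * and immediately unblock the victim.  The block makes the scheduler
 * pick a replacement; the unblock lets the victim become ready again
 * and be re-scheduled, possibly on another core. */
static void _Scheduler_Check_for_migrations(
  Thread_Control *highest_ready,
  Thread_Control *lowest_scheduled
)
{
  if ( highest_ready->priority < lowest_scheduled->priority ) {
    _Thread_Set_state( lowest_scheduled, STATES_MIGRATING );
    _Thread_Clear_state( lowest_scheduled, STATES_MIGRATING );
  }
}

The window between the set and the clear is where the victim can be the outgoing thread on one core and the incoming thread on another, which is exactly the two-state-change pattern described in the message above.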
Re: PR2183 and affinity migration
On 06/27/2014 06:25 PM, Joel Sherrill wrote:
> I see this as two state changes within a single scheduler operation, but
> that sounds the same as your double migration. We are likely replacing
> the heir twice and tripping the same condition.
>
> If you want a test case for this, we can get our code cleaned up and
> push it. Gaisler will have to send their test code.

I am currently working on a solution for bug PR2183, since I also hit it in a new test case that works without the disabled thread dispatching trick. It should be ready next week.

--
Sebastian Huber, embedded brains GmbH

Address : Dornierstr. 4, D-82178 Puchheim, Germany
Phone   : +49 89 189 47 41-16
Fax     : +49 89 189 47 41-09
E-Mail  : sebastian.hu...@embedded-brains.de
PGP     : Public key available on request.

This message is not a business communication within the meaning of the EHUG.
Re: PR2183 and affinity migration
On Jun 27, 2014 3:04 PM, Sebastian Huber wrote:
>
> On 06/27/2014 06:25 PM, Joel Sherrill wrote:
> > I see this as two state changes within a single scheduler
> > operation, but that sounds the same as your double
> > migration. We are likely replacing the heir twice and
> > tripping the same condition.
> >
> > If you want a test case for this, we can get our code
> > cleaned up and push it. Gaisler will have to send their
> > test code.
>
> I am currently working on a solution for bug PR2183, since I also hit
> it in a new test case that works without the disabled thread
> dispatching trick. It should be ready next week.

Thanks. FYI, next week has a national holiday for us. Hope you enjoyed the football game yesterday.

> --
> Sebastian Huber, embedded brains GmbH
>
> Address : Dornierstr. 4, D-82178 Puchheim, Germany
> Phone   : +49 89 189 47 41-16
> Fax     : +49 89 189 47 41-09
> E-Mail  : sebastian.hu...@embedded-brains.de
> PGP     : Public key available on request.
>
> This message is not a business communication within the meaning of the EHUG.