For the Pi2 SMP: I started looking at this a while ago. If I remember correctly, Pi2 SMP worked on QEMU, but it fails on an original Pi2 that I have (using smp01.exe). I don't have a good way to debug on the hardware, so I went down the path of adding printk statements to figure out where it was hanging. The console log below shows how far I was able to get. It looks like it is stuck in _CPU_SMP_Processor_event_receive, waiting for the second CPU to change state. If anyone has clues about where to look next, I can try a few changes out.
Alan

RTEMS RPi 2B 1.1 (1GB) [00a21041]
in _SMP_Handler_initialize
in _SMP_Handler_initialize, max processors = 4
in _SMP_Handler_initialize - calling _CPU_SMP_Initialize
in _CPU_SMP_Initialize
in _SMP_Handler_initialize - _SMP_Processor_maximum = 4
in _SMP_Handler_initialize - calling _SMP_Start_processors
In _SMP_Start_processors
In _SMP_Start_processors - index = 0
In _SMP_Start_processors - cpu_index = self, so started already
In _SMP_Start_processors - index = 1
In _SMP_Start_processors - calling _CPU_SMP_Start_processor
in _CPU_SMP_Start_processor : cpu_index = 1
in _CPU_SMP_Start_processor - call _Per_CPU_State_wait_for_non_initial_state - bool started = 0
in Per_CPU_State_wait_for_non_initial_state index = 1, timeout = 0, CPU state before = 0
in Per_CPU_State_wait_for_non_initial_state - about to call _CPU_SMP_Processor_event_receive in while loop
in Per_CPU_State_wait_for_non_initial_state - after _CPU_SMP_Processor_event_receive in while loop, CPU state after = 0
in Per_CPU_State_wait_for_non_initial_state - about to call _CPU_SMP_Processor_event_receive in while loop

On Sun, Jun 21, 2020 at 7:09 AM Chris Johns <chr...@rtems.org> wrote:
>
> > On 21 Jun 2020, at 12:48 am, Joel Sherrill <j...@rtems.org> wrote:
> >
> > Hi
> >
> > The m2006-2 candidate passed more of the build sweep steps than any of the
> > other candidates.
>
> Great. I will branch the repos tomorrow.
>
> Thank you for all your testing and reports. They are really helpful and
> important.
>
> > The bsp builder sweep of all BSPs and many (1700+) configurations has the
> > normal 6 GCC induced epiphany failures. All but one of the BSP bsets built.
> > atsamv failed:
> >
> > https://lists.rtems.org/pipermail/build/2020-June/015864.html
> >
> > Looks like libbsd failed to build for that BSP with this:
> >
> > ===========================
> > [1875/1925] Linking build/arm-rtems5-atsamv-default/epoch01.exe
> > /home/joel/rtems-cron-5.0.0-m2006-2/rtems-source-builder-5.0.0-m2006-2/rtems/build/tmp/sb-1001-staging/bin/../lib/gcc/arm-rtems5/7.5.0/../../../../arm-rtems5/bin/ld:linkcmds.base:326
> > cannot move location counter backwards (from 000000002047ab60 to
> > 000000002045f000)
> > collect2: error: ld returned 1 exit status
> >
> > /home/joel/rtems-cron-5.0.0-m2006-2/rtems-source-builder-5.0.0-m2006-2/rtems/build/tmp/sb-1001-staging/bin/../lib/gcc/arm-rtems5/7.5.0/../../../../arm-rtems5/bin/ld:linkcmds.base:326
> > cannot move location counter backwards (from 000000002053f9a0 to
> > 000000002045f000)
> > collect2: error: ld returned 1 exit status
> >
> > /home/joel/rtems-cron-5.0.0-m2006-2/rtems-source-builder-5.0.0-m2006-2/rtems/build/tmp/sb-1001-staging/bin/../lib/gcc/arm-rtems5/7.5.0/../../../../arm-rtems5/bin/ld:linkcmds.base:326
> > cannot move location counter backwards (from 000000002053f9a0 to
> > 000000002045f000)
> > collect2: error: ld returned 1 exit status
> >
> > Waf: Leaving directory
> > `/home/joel/rtems-cron-5.0.0-m2006-2/rtems-source-builder-5.0.0-m2006-2/rtems/build/rtems-libbsd-d38dbbe18e5315bf69a7c3916d71ef3838d4c20d-x86_64-linux-gnu-1/rtems-libbsd-5.0.0-m2006-2/build/arm-rtems5-atsamv-default'
> > Build failed
> > ===========================
> >
> > No one may read this far but this failure and Jan's Pi2 testing failure
> > appear to be the hurdles now.
>
> That is a shame. Fixes are welcome.
>
> Chris

_______________________________________________
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel
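For anyone chasing the atsamv failure: GNU ld's "cannot move location counter backwards" error generally means a `. = <address>` assignment in the linker script tries to set the location counter below where the preceding output sections already ended, i.e. the libbsd-enabled image has outgrown a region whose end is pinned to a fixed address. A hypothetical fragment (the addresses and section names are made up, not the actual atsamv linkcmds.base line 326) illustrating the failure mode:

```
/* Hypothetical linker-script fragment, for illustration only. */
SECTIONS
{
  .text 0x20400000 : { *(.text*) }  /* with libbsd, ends past 0x2045f000 */

  . = 0x2045f000;  /* ld: "cannot move location counter backwards"
                      because .text already ended above this address */

  .data : { *(.data*) }
}
```

In the reported log, the counter is at 0x2047ab60 (and later 0x2053f9a0) when the script tries to rewind it to 0x2045f000, which suggests the epoch01.exe image exceeds the fixed region by roughly 110 KiB to 900 KiB.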