I suspect this triggers a compiler bug on armel, as the code seems just fine to me. I have tested on armel and managed to get the code building with the following patch.
diff --git a/src/tbb/concurrent_monitor.h b/src/tbb/concurrent_monitor.h
index 3d20ef5b..b010b2e5 100644
--- a/src/tbb/concurrent_monitor.h
+++ b/src/tbb/concurrent_monitor.h
@@ -23,6 +23,7 @@
 #include "concurrent_monitor_mutex.h"
 #include "semaphore.h"
+#include <assert.h>

 #include <atomic>

 namespace tbb {
@@ -289,8 +290,10 @@ public:
             my_epoch.store(my_epoch.load(std::memory_order_relaxed) + 1, std::memory_order_relaxed);
             n = my_waitset.front();
             if (n != end) {
+                wait_node<Context>* wn = to_wait_node(n);
+                assert(to_wait_node(n));
+                wn->my_is_in_list.store(false, std::memory_order_relaxed);
                 my_waitset.remove(*n);
-                to_wait_node(n)->my_is_in_list.store(false, std::memory_order_relaxed);
             }
         }
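To make the reordering concrete, here is a minimal standalone sketch of the pattern the patch changes, using std::mutex and std::list stand-ins. The names (WaitNode, Waitset, notify_original, notify_reordered) are made up and this is not the actual TBB code; it only mirrors the shape of the notify path so the two orderings can be compared in isolation, for example when trying to reproduce the problem on armel.

// Hypothetical stand-ins for the TBB types; not the real implementation.
#include <atomic>
#include <cassert>
#include <list>
#include <mutex>

struct WaitNode {
    std::atomic<bool> in_list{true};   // stands in for my_is_in_list
};

struct Waitset {
    std::list<WaitNode*> nodes;        // stands in for the intrusive wait list
    WaitNode* front() { return nodes.empty() ? nullptr : nodes.front(); }
    void remove(WaitNode& n) { nodes.remove(&n); }
};

std::mutex monitor_mutex;              // stands in for my_mutex
Waitset waitset;

// Original order: unlink the node first, then clear its flag.
void notify_original() {
    std::lock_guard<std::mutex> lock(monitor_mutex);
    WaitNode* n = waitset.front();
    if (n != nullptr) {
        waitset.remove(*n);
        n->in_list.store(false, std::memory_order_relaxed);
    }
}

// Reordered, as in the patch: clear the flag first, then unlink the node.
// Whether this is observable depends on whether anything reads the flag
// without holding monitor_mutex, which is exactly the open question.
void notify_reordered() {
    std::lock_guard<std::mutex> lock(monitor_mutex);
    WaitNode* n = waitset.front();
    if (n != nullptr) {
        assert(n != nullptr);          // mirrors the assert() added by the patch
        n->in_list.store(false, std::memory_order_relaxed);
        waitset.remove(*n);
    }
}

int main() {
    WaitNode node;
    waitset.nodes.push_back(&node);
    notify_reordered();
    assert(!node.in_list.load(std::memory_order_relaxed));
    return 0;
}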