Hi Jakub,
About the snippet below,
  if (gomp_barrier_last_thread (state))
    {
      if (team->task_count == 0)
        {
          gomp_team_barrier_done (&team->barrier, state);
          gomp_mutex_unlock (&team->task_lock);
          gomp_team_barrier_wake (&team->barrier, 0);
          return;
        }
      gomp_team_barrier_set_waiting_for_tasks (&team->barrier);
    }
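For readers following along, the logic in this fragment is: the last thread
to arrive at the team barrier either completes the barrier and wakes the
waiters (when no tasks are outstanding), or marks the barrier as still
waiting for tasks. A rough generic analogue of that pattern, written with
plain pthreads rather than libgomp's futex-based primitives and with
invented names, might look like this (initialization and the path that
releases the barrier once the pending tasks drain are omitted):

#include <pthread.h>

struct team_barrier
{
  pthread_mutex_t lock;
  pthread_cond_t cond;
  unsigned arrived;        /* Threads that have reached the barrier.  */
  unsigned nthreads;       /* Team size.  */
  unsigned pending_tasks;  /* Tasks that must finish before release.  */
  int done;
};

/* Called by each thread when it reaches the barrier.  The last thread
   either releases everyone (no pending tasks) or leaves the barrier
   "waiting for tasks" until pending_tasks drops to zero elsewhere.  */
void
barrier_arrive (struct team_barrier *b)
{
  pthread_mutex_lock (&b->lock);
  if (++b->arrived == b->nthreads && b->pending_tasks == 0)
    {
      b->done = 1;
      pthread_cond_broadcast (&b->cond);  /* Analogue of the wake call.  */
    }
  while (!b->done)
    pthread_cond_wait (&b->cond, &b->lock);
  pthread_mutex_unlock (&b->lock);
}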
Hi all,
Here is my second evaluation report, together with a simple program that
I was able to compile with my parallel version of GCC. Keep in mind that
I still have lots of concurrency issues inside the compiler, and therefore
my branch will fail to compile pretty much anything else.
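For a sense of what "simple" means here, a hypothetical program of roughly
this shape (a few small, self-contained functions; this is not the actual
file attached to the report) is the kind of input the branch can currently
handle:

#include <stdio.h>

/* Several independent functions, made up for illustration.  */
static int add (int a, int b) { return a + b; }
static int mul (int a, int b) { return a * b; }

static int
fib (int n)
{
  return n < 2 ? n : fib (n - 1) + fib (n - 2);
}

int
main (void)
{
  printf ("%d %d %d\n", add (2, 3), mul (4, 5), fib (10));
  return 0;
}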
To reproduce
Hi,
Just submitted a WIP patch for my current status.
I've finished unifying the three queues and reducing the execution paths.
From now on, I will reduce the locked region so that in the end, only the queue
accesses are locked.
Once this is done, splitting the queues and implementing work-stealing
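To make the plan concrete, here is a minimal sketch of what "only the queue
accesses are locked" plus work-stealing could look like. This is not the
actual libgomp patch, and the names (task_queue, queue_push, queue_pop,
queue_steal) are invented: each thread owns a queue with its own lock, the
lock is held only for the duration of the queue operation itself, and an
idle thread steals from the opposite end of another thread's queue.

#include <pthread.h>
#include <stddef.h>

struct task
{
  void (*fn) (void *);
  void *data;
};

#define QUEUE_CAP 256

struct task_queue
{
  pthread_mutex_t lock;          /* Protects only the fields below.  */
  struct task tasks[QUEUE_CAP];
  size_t head;                   /* Steal end.  */
  size_t tail;                   /* Owner end.  */
};

/* Owner pushes at the tail; the lock covers just the queue update.  */
int
queue_push (struct task_queue *q, struct task t)
{
  int ok = 0;
  pthread_mutex_lock (&q->lock);
  if (q->tail - q->head < QUEUE_CAP)
    {
      q->tasks[q->tail++ % QUEUE_CAP] = t;
      ok = 1;
    }
  pthread_mutex_unlock (&q->lock);
  return ok;
}

/* Owner pops from the tail (LIFO keeps its own work cache-hot).  */
int
queue_pop (struct task_queue *q, struct task *out)
{
  int ok = 0;
  pthread_mutex_lock (&q->lock);
  if (q->tail != q->head)
    {
      *out = q->tasks[--q->tail % QUEUE_CAP];
      ok = 1;
    }
  pthread_mutex_unlock (&q->lock);
  return ok;
}

/* A thief takes from the head (FIFO), holding the victim's lock only for
   the queue access itself.  */
int
queue_steal (struct task_queue *victim, struct task *out)
{
  int ok = 0;
  pthread_mutex_lock (&victim->lock);
  if (victim->tail != victim->head)
    {
      *out = victim->tasks[victim->head++ % QUEUE_CAP];
      ok = 1;
    }
  pthread_mutex_unlock (&victim->lock);
  return ok;
}

The real implementation will of course use libgomp's own task
representation and locking primitives; the point is only that the critical
sections shrink to the individual queue operations.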
Snapshot gcc-10-20190721 is now available on
ftp://gcc.gnu.org/pub/gcc/snapshots/10-20190721/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.
This snapshot has been generated from the GCC 10 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/trunk revision
Hi all,
Consider part of an example (Figure 20) from P0190R4
(http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2017/p0190r4.pdf), shown
below:
void thread1 (void)
{
  int * volatile p;
  p = rcu_dereference (gip);
  if (p)
    assert (*(p + p[0]) == 42);
}
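For context, rcu_dereference and gip are not defined in the fragment; in
P0190R4 (and in the Linux kernel) rcu_dereference is essentially a load
with memory_order_consume semantics. One hypothetical set of stand-in
definitions that makes the fragment compile (an assumption for
illustration, not necessarily what was used to produce the .gimple dump)
is:

#include <assert.h>     /* For the assert in thread1.  */
#include <stdatomic.h>

/* RCU-protected global pointer, published by some updater thread.  */
static _Atomic (int *) gip;

/* Conventionally, rcu_dereference is a memory_order_consume load.  */
#define rcu_dereference(p) \
  atomic_load_explicit (&(p), memory_order_consume)

Prepending these definitions to the function above gives a complete
translation unit.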
The .gimple cod