ivanradanov wrote:

No, you are right; sorry for the back and forth. As you said, since a wsloop can 
only be nested in an omp.parallel, it is immediately obvious that it binds to 
the omp.parallel threads, so that makes sense.

My only concern was that at some point some transformation (perhaps in the 
future, since I don't think anything transforms `wsloop`s currently) could 
assume that either all or none of the threads of the team an `omp.parallel` 
launches will execute the parent block of a `wsloop` that binds to that team.

I thought this was a fair assumption for an optimization/transformation to 
make, because if, for example, only one of the threads executes a wsloop, it 
would not produce the intended result. Suppose such a transformation adds an 
operation immediately before the wsloop that is supposed to be executed by all 
threads in the omp.parallel; that operation would then be erroneously wrapped 
in an omp.single in LowerWorkshare. So the intention was to guard against a 
potential error like that. Let me know if I am wrong here, since I am sure 
people here have more experience with this than I do.

I can see that if no transformation can make that assumption, then it is 
perfectly safe to use `omp.wsloop` instead of `workshare.loop_wrapper`. I am 
fine with either way and can make that change if you think it is better. (In 
fact, that is what the initial version of this PR did; I introduced 
`workshare.loop_wrapper` later because I was concerned about a potential issue 
like the one above.)
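To make the hazard concrete, here is a rough sketch of the IR shape I have in 
mind (the op names and region syntax are simplified illustrations, not taken 
from the PR):

```
// Inside an omp.parallel region, the call before the wsloop is meant to
// run on every thread of the team. A pass that assumed all-or-nothing
// execution of the wsloop's parent block could, as in the LowerWorkshare
// scenario above, erroneously wrap that call in an omp.single, changing
// the program's behavior.
omp.parallel {
  fir.call @setup() : () -> ()   // intended: executed by all threads
  omp.wsloop {
    omp.loop_nest (%i) : index = (%lb) to (%ub) step (%step) {
      // loop body workshared among the team's threads
      omp.yield
    }
  }
  omp.terminator
}
```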

https://github.com/llvm/llvm-project/pull/101445
_______________________________________________
llvm-branch-commits mailing list
llvm-branch-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-branch-commits