ivanradanov wrote:

> Maybe support for this operation could be just based on changes to how the 
> MLIR representation is built in the first place, what do you think?

This is partly what this implementation aims to do. In fact, after the pass 
that lowers the omp.workshare operation, we are left with IR very close to the 
one you showed in your example.

The approach taken here is similar to the omp.workdistribute implementation, in 
that the purpose of the omp.workshare and omp.workshare.loop_wrapper ops is to 
preserve the high-level optimizations available when using HLFIR. After we are 
done with the LowerWorkshare pass, both omp.workshare and 
omp.workshare.loop_wrapper disappear.

The sole purpose of the omp.workshare.loop_wrapper op is to mark more 
explicitly the loops that need to be "parallelized" by the workshare construct, 
and to preserve that information through the pipeline. Its lifetime extends 
from the frontend (Fortran->{HLFIR,FIR}) up to the LowerWorkshare pass, which 
runs after we are done with HLFIR optimizations (after HLFIR->FIR lowering); 
the same holds for omp.workshare.


https://github.com/llvm/llvm-project/pull/101445
_______________________________________________
llvm-branch-commits mailing list
llvm-branch-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-branch-commits
