I noticed the other day that the USM bullet point only talked about the 'unified_shared_memory' clause and not about the newer 'self_maps' clause. That's now fixed by the attached patch.
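For context, here is a minimal, illustrative example (not part of the patch) of the kind of code the updated wording is about. With 'self_maps' (OpenMP 6.0), host pointers can be dereferenced directly inside target regions without map clauses, provided the device has the properties described in the patched paragraphs; 'requires unified_shared_memory' would permit the same usage:

  /* Illustrative sketch only; build with an offload-enabled gcc -fopenmp.  */
  #include <stdio.h>
  #include <stdlib.h>

  /* The whole translation unit requires self maps (OpenMP 6.0);
     the older unified_shared_memory clause would also allow this example.  */
  #pragma omp requires self_maps

  int main (void)
  {
    int *a = malloc (100 * sizeof (int));

    /* The host pointer 'a' is used directly on the device;
       no map clauses are needed.  */
    #pragma omp target
    for (int i = 0; i < 100; i++)
      a[i] = i;

    printf ("%d\n", a[99]);
    free (a);
    return 0;
  }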
-> https://gcc.gnu.org/onlinedocs/libgomp/Offload-Target-Specifics.html

Committed as r15-9506-g1ff4a22103767c

Tobias
commit 1ff4a22103767cd0133f0c1db6e85220f28ab3fa
Author: Tobias Burnus <tbur...@baylibre.com>
Date:   Tue Apr 15 23:19:50 2025 +0200

    libgomp.texi (gcn, nvptx): Mention self_maps alongside USM

    libgomp/ChangeLog:

            * libgomp.texi (gcn, nvptx): Mention self_maps clause besides
            unified_shared_memory in the requirements item.
---
 libgomp/libgomp.texi | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/libgomp/libgomp.texi b/libgomp/libgomp.texi
index 3d3a56cc29a..dfd189b646e 100644
--- a/libgomp/libgomp.texi
+++ b/libgomp/libgomp.texi
@@ -6888,7 +6888,7 @@ The implementation remark:
 @code{device(ancestor:1)}) are processed serially per @code{target} region
 such that the next reverse offload region is only executed after the previous
 one returned.
-@item OpenMP code that has a @code{requires} directive with
+@item OpenMP code that has a @code{requires} directive with @code{self_maps} or
 @code{unified_shared_memory} is only supported if all AMD GPUs have the
 @code{HSA_AMD_SYSTEM_INFO_SVM_ACCESSIBLE_BY_DEFAULT} property; for discrete
 GPUs, this may require setting the @code{HSA_XNACK} environment
@@ -7045,7 +7045,7 @@ The implementation remark:
 Per device, reverse offload regions are processed serially such that the next
 reverse offload region is only executed after the previous one returned.
-@item OpenMP code that has a @code{requires} directive with
+@item OpenMP code that has a @code{requires} directive with @code{self_maps} or
 @code{unified_shared_memory} runs on nvptx devices if and only if all of those
 support the @code{pageableMemoryAccess} property;@footnote{
 @uref{https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#um-requirements}}