[clang] 15a1769 - Emit OpenCL metadata when targeting SPIR-V

2022-04-05 Thread Shangwu Yao via cfe-commits

Author: Shangwu Yao
Date: 2022-04-05T20:58:32Z
New Revision: 15a1769631ff0b2b3e830b03e51ae5f54f08a0ab

URL: https://github.com/llvm/llvm-project/commit/15a1769631ff0b2b3e830b03e51ae5f54f08a0ab
DIFF: https://github.com/llvm/llvm-project/commit/15a1769631ff0b2b3e830b03e51ae5f54f08a0ab.diff

LOG: Emit OpenCL metadata when targeting SPIR-V

This is required for converting function calls such as get_global_id()
into SPIR-V builtins.

Differential Revision: https://reviews.llvm.org/D123049

Added: 


Modified: 
clang/lib/CodeGen/CodeGenModule.cpp
clang/lib/Frontend/CompilerInvocation.cpp
clang/test/CodeGenCUDASPIRV/kernel-cc.cu

Removed: 




diff --git a/clang/lib/CodeGen/CodeGenModule.cpp b/clang/lib/CodeGen/CodeGenModule.cpp
index ddcf564e688fe..5536626d0691a 100644
--- a/clang/lib/CodeGen/CodeGenModule.cpp
+++ b/clang/lib/CodeGen/CodeGenModule.cpp
@@ -784,7 +784,7 @@ void CodeGenModule::Release() {
                            LangOpts.OpenMP);
 
   // Emit OpenCL specific module metadata: OpenCL/SPIR version.
-  if (LangOpts.OpenCL) {
+  if (LangOpts.OpenCL || (LangOpts.CUDAIsDevice && getTriple().isSPIRV())) {
     EmitOpenCLMetadata();
     // Emit SPIR version.
     if (getTriple().isSPIR()) {
diff --git a/clang/lib/Frontend/CompilerInvocation.cpp b/clang/lib/Frontend/CompilerInvocation.cpp
index 91adacdee3ad7..f586f8d64a7ac 100644
--- a/clang/lib/Frontend/CompilerInvocation.cpp
+++ b/clang/lib/Frontend/CompilerInvocation.cpp
@@ -3328,6 +3328,10 @@ void CompilerInvocation::setLangDefaults(LangOptions &Opts, InputKind IK,
     // whereas respecting contract flag in backend.
     Opts.setDefaultFPContractMode(LangOptions::FPM_FastHonorPragmas);
   } else if (Opts.CUDA) {
+    if (T.isSPIRV()) {
+      // Emit OpenCL version metadata in LLVM IR when targeting SPIR-V.
+      Opts.OpenCLVersion = 200;
+    }
     // Allow fuse across statements disregarding pragmas.
     Opts.setDefaultFPContractMode(LangOptions::FPM_Fast);
   }

diff --git a/clang/test/CodeGenCUDASPIRV/kernel-cc.cu b/clang/test/CodeGenCUDASPIRV/kernel-cc.cu
index 1ba906ebc90d7..9e575d232b34d 100644
--- a/clang/test/CodeGenCUDASPIRV/kernel-cc.cu
+++ b/clang/test/CodeGenCUDASPIRV/kernel-cc.cu
@@ -7,3 +7,6 @@
 // CHECK: define spir_kernel void @_Z6kernelv()
 
 __attribute__((global)) void kernel() { return; }
+
+// CHECK: !opencl.ocl.version = !{[[OCL:![0-9]+]]}
+// CHECK: [[OCL]] = !{i32 2, i32 0}



___
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits


[clang] 31d8dbd - [CUDA/SPIR-V] Force passing aggregate type byval

2022-07-22 Thread Shangwu Yao via cfe-commits

Author: Shangwu Yao
Date: 2022-07-22T20:30:15Z
New Revision: 31d8dbd1e5b4ee0fd04bfeb3a64d8f9f33260905

URL: https://github.com/llvm/llvm-project/commit/31d8dbd1e5b4ee0fd04bfeb3a64d8f9f33260905
DIFF: https://github.com/llvm/llvm-project/commit/31d8dbd1e5b4ee0fd04bfeb3a64d8f9f33260905.diff

LOG: [CUDA/SPIR-V] Force passing aggregate type byval

This patch forces aggregate kernel arguments to be copied by value when
compiling CUDA targeting SPIR-V. Previously, an aggregate was not passed by
value if the user defined any of a destructor, copy constructor, or move
constructor. With this change, the SPIR-V generated from CUDA follows the
CUDA spec
(https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#global-function-argument-processing)
and matches the NVPTX implementation
(https://github.com/llvm/llvm-project/blob/41958f76d8a2c47484fa176cba1de565cfe84de7/clang/lib/CodeGen/TargetInfo.cpp#L7241).

Differential Revision: https://reviews.llvm.org/D130387

Added: 
clang/test/CodeGenCUDASPIRV/copy-aggregate-byval.cu

Modified: 
clang/lib/CodeGen/TargetInfo.cpp

Removed: 




diff --git a/clang/lib/CodeGen/TargetInfo.cpp b/clang/lib/CodeGen/TargetInfo.cpp
index e8ee5533104ca..fc0952e68a667 100644
--- a/clang/lib/CodeGen/TargetInfo.cpp
+++ b/clang/lib/CodeGen/TargetInfo.cpp
@@ -10449,6 +10449,15 @@ ABIArgInfo SPIRVABIInfo::classifyKernelArgumentType(QualType Ty) const {
       LTy = llvm::PointerType::getWithSamePointeeType(PtrTy, GlobalAS);
       return ABIArgInfo::getDirect(LTy, 0, nullptr, false);
     }
+
+    // Force copying aggregate type in kernel arguments by value when
+    // compiling CUDA targeting SPIR-V. This is required for the object
+    // copied to be valid on the device.
+    // This behavior follows the CUDA spec
+    // https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#global-function-argument-processing,
+    // and matches the NVPTX implementation.
+    if (isAggregateTypeForABI(Ty))
+      return getNaturalAlignIndirect(Ty, /* byval */ true);
   }
   return classifyArgumentType(Ty);
 }

diff --git a/clang/test/CodeGenCUDASPIRV/copy-aggregate-byval.cu b/clang/test/CodeGenCUDASPIRV/copy-aggregate-byval.cu
new file mode 100644
index 0..bceca4d4ee5d6
--- /dev/null
+++ b/clang/test/CodeGenCUDASPIRV/copy-aggregate-byval.cu
@@ -0,0 +1,25 @@
+// Tests CUDA kernel arguments get copied by value when targeting SPIR-V, even with
+// destructor, copy constructor or move constructor defined by user.
+
+// RUN: %clang -Xclang -no-opaque-pointers -emit-llvm --cuda-device-only --offload=spirv32 \
+// RUN:   -nocudalib -nocudainc %s -o %t.bc -c 2>&1
+// RUN: llvm-dis %t.bc -o %t.ll
+// RUN: FileCheck %s --input-file=%t.ll
+
+// RUN: %clang -Xclang -no-opaque-pointers -emit-llvm --cuda-device-only --offload=spirv64 \
+// RUN:   -nocudalib -nocudainc %s -o %t.bc -c 2>&1
+// RUN: llvm-dis %t.bc -o %t.ll
+// RUN: FileCheck %s --input-file=%t.ll
+
+class GpuData {
+ public:
+  __attribute__((host)) __attribute__((device)) GpuData(int* src) {}
+  __attribute__((host)) __attribute__((device)) ~GpuData() {}
+  __attribute__((host)) __attribute__((device)) GpuData(const GpuData& other) {}
+  __attribute__((host)) __attribute__((device)) GpuData(GpuData&& other) {}
+};
+
+// CHECK: define
+// CHECK-SAME: spir_kernel void @_Z6kernel7GpuData(%class.GpuData* noundef byval(%class.GpuData) align
+
+__attribute__((global)) void kernel(GpuData output) {}





[clang] [CudaSPIRV] Add support for optional spir-v attributes (PR #116589)

2024-11-19 Thread Shangwu Yao via cfe-commits

https://github.com/ShangwuYao approved this pull request.

Looks great! Thanks Alexander!

https://github.com/llvm/llvm-project/pull/116589


[clang] [clang-tools-extra] [CudaSPIRV] Allow using integral non-type template parameters as attribute args (PR #131546)

2025-03-17 Thread Shangwu Yao via cfe-commits


@@ -8,9 +8,23 @@
 __attribute__((reqd_work_group_size(128, 1, 1)))
 __global__ void reqd_work_group_size_128_1_1() {}
 
+template <unsigned a, unsigned b, unsigned c>
+__attribute__((reqd_work_group_size(a, b, c)))
+__global__ void reqd_work_group_size_a_b_c() {}
+
+template <>
+__global__ void reqd_work_group_size_a_b_c<128,1,1>(void);
+
 __attribute__((work_group_size_hint(2, 2, 2)))
 __global__ void work_group_size_hint_2_2_2() {}
 
+template <unsigned a, unsigned b, unsigned c>
+__attribute__((work_group_size_hint(a, b, c)))
+__global__ void work_group_size_hint_a_b_c() {}
+
+template <>
+__global__ void work_group_size_hint_a_b_c<128,1,1>(void);

ShangwuYao wrote:

Should we check the correct metadata is added?
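For context on the question above, a check along these lines might be what is being asked for (a hedged sketch; the exact metadata node names and IDs in the emitted IR would need to be confirmed against clang's actual output for these attributes):

```llvm
; Hypothetical FileCheck lines verifying that the instantiated kernel
; carries the work-group-size metadata with the three i32 dimensions.
; CHECK: define {{.*}}void @{{.*}}reqd_work_group_size_a_b_c{{.*}} !reqd_work_group_size ![[WGS:[0-9]+]]
; CHECK: ![[WGS]] = !{i32 128, i32 1, i32 1}
```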

https://github.com/llvm/llvm-project/pull/131546


[clang] [clang][opencl] Allow passing all zeros to reqd_work_group_size (PR #131543)

2025-03-16 Thread Shangwu Yao via cfe-commits

https://github.com/ShangwuYao approved this pull request.

Cool!

https://github.com/llvm/llvm-project/pull/131543


[clang] [clang-tools-extra] [CudaSPIRV] Allow using integral non-type template parameters as attribute args (PR #131546)

2025-03-17 Thread Shangwu Yao via cfe-commits


@@ -812,6 +838,12 @@ void Sema::InstantiateAttrs(const MultiLevelTemplateArgumentList &TemplateArgs,
       continue;
     }
 
+    if (const auto *ReqdWorkGroupSize =
+            dyn_cast<ReqdWorkGroupSizeAttr>(TmplAttr)) {
+      instantiateDependentReqdWorkGroupSizeAttr(*this, TemplateArgs,

ShangwuYao wrote:

Are there still checks at this point to see if the XDim, YDim and ZDim are valid?

https://github.com/llvm/llvm-project/pull/131546


[clang] [clang-tools-extra] [CudaSPIRV] Allow using integral non-type template parameters as attribute args (PR #131546)

2025-03-17 Thread Shangwu Yao via cfe-commits

https://github.com/ShangwuYao approved this pull request.

Pretty neat!!

https://github.com/llvm/llvm-project/pull/131546


[clang] [llvm] [clang][OpenMP][SPIR-V] Fix addrspace of global constants (PR #134399)

2025-04-20 Thread Shangwu Yao via cfe-commits

ShangwuYao wrote:

This test reproduces the issue above:

```
// RUN: %clang_cc1 -fcuda-is-device -triple spirv32 -o - -emit-llvm -x cuda %s | FileCheck %s
// RUN: %clang_cc1 -fcuda-is-device -triple spirv64 -o - -emit-llvm -x cuda %s | FileCheck %s

// CHECK: @.str = private unnamed_addr addrspace(4) constant [13 x i8] c"Hello World\0A\00", align 1

extern "C" __attribute__((device)) int printf(const char* format, ...);

__attribute__((global)) void printf_kernel() {
  printf("Hello World\n");
}
```

Could you also add the test case as test/CodeGenCUDASPIRV/printf.cu or something? Thanks!!


https://github.com/llvm/llvm-project/pull/134399