Hello!
I am currently testing a simple convolution using TOPI, and I experimented with the code below:

```python
import numpy as np
import topi
from tvm import relay
from tvm.relay import testing
import tvm
from tvm.contrib import graph_runtime

## Setting Target and ctx
target_mali_gpu = tvm.target.create('opencl -device=mali')
target_host = 'llvm -target=aarch64-linux-gnu'
ctx_mali_gpu = tvm.runtime.opencl(0)

## Setting Shape of Tensor
input_size = (1, 64, 224, 224)
p1_size = (64, 64, 3, 3)

## Make Placeholder
input_data = tvm.te.placeholder(shape=input_size, dtype="float32", name="Input")
param1 = tvm.te.placeholder(shape=p1_size, dtype="float32", name="P1")

## Build Module
with tvm.target.mali():
    conv = topi.mali.conv2d_nchw_spatial_pack(
        input_data, param1, [1, 1], [1, 1, 1, 1], [1, 1], "float32"
    )
    sch = topi.mali.schedule_conv2d_nchw_spatial_pack([conv])
    mod = tvm.build(sch, [input_data, param1], target_mali_gpu, target_host)

data = np.random.uniform(-1, 1, size=input_size).astype("float32")
p1 = np.random.uniform(-1, 1, size=p1_size).astype("float32")

tvm_input = tvm.nd.array(data, ctx_mali_gpu)
tvm_p1 = tvm.nd.array(p1, ctx_mali_gpu)

## Running
mod(tvm_input, tvm_p1)
ctx_mali_gpu.sync()
```

When I run this code, I get the following error:

```
Traceback (most recent call last):
  File "TEST.py", line 37, in <module>
    mod(tvm_input,tvm_p1)
  File "/home/firefly/Desktop/TVM/tvm/python/tvm/runtime/module.py", line 110, in __call__
    return self.entry_func(*args)
  File "/home/firefly/Desktop/TVM/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 219, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (1) /home/firefly/Desktop/TVM/tvm/build/libtvm.so(TVMFuncCall+0x70) [0x7f8ac836c0]
  [bt] (0) /home/firefly/Desktop/TVM/tvm/build/libtvm.so(+0x1462080) [0x7f8ac95080]
              ^
  vstore4(((float44)(((__global float*)P1)[_1.s0],((__global float*)P1)[_1.s1],((__global float*)P1)[_1.s2],((__global float*)P1)[_1.s3])), 0, (__global float*)kernel_vec + ((((int)get_group_id(0)) * 1024) + (((int)get_local_id(0)) * 4)));
  File "/home/firefly/Desktop/TVM/tvm/src/runtime/opencl/opencl_module.cc", line 234
  File "/home/firefly/Desktop/TVM/tvm/src/runtime/library_module.cc", line 89
TVMError: Check failed: ret == 0 (-1 vs. 0) : OpenCL build error for device=0x7f897b07f0
<source>:3:13: error: use of undeclared identifier 'float44'
error: Compiler frontend failed (error code 59)

terminate called after throwing an instance of 'dmlc::Error'
  what():  [17:00:06] /home/firefly/Desktop/TVM/tvm/src/runtime/workspace_pool.cc:115: Check failed: allocated_.size() == 1 (4 vs. 1) :
Stack trace:
  [bt] (0) /home/firefly/Desktop/TVM/tvm/build/libtvm.so(tvm::runtime::WorkspacePool::Pool::Release(DLContext, tvm::runtime::DeviceAPI*)+0x4d0) [0x7f8acd3d08]
  [bt] (1) /home/firefly/Desktop/TVM/tvm/build/libtvm.so(tvm::runtime::WorkspacePool::~WorkspacePool()+0x48) [0x7f8acd26f8]
  [bt] (2) /home/firefly/Desktop/TVM/tvm/build/libtvm.so(tvm::runtime::cl::OpenCLThreadEntry::~OpenCLThreadEntry()+0x18) [0x7f8ad13650]
  [bt] (3) /lib/aarch64-linux-gnu/libc.so.6(__call_tls_dtors+0x48) [0x7f8f6c8620]
```

Looking at the error, it appears that the invalid data type `float44` is emitted when the OpenCL kernel code is generated, but I am not sure why this happens. The same code runs fine on an older version of TVM; the problem only appears after updating. Is this an internal TVM bug, or did I make a mistake in my code?
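In case it helps with reproducing, the generated OpenCL source can also be inspected without launching the kernel. This is a minimal sketch using the `mod` object built above; `imported_modules[0].get_source()` is the usual way to dump the device-side code from a built module:

```python
# Inspect the generated OpenCL kernel without running it.
# mod is the module returned by tvm.build(...) above; the OpenCL device
# module is attached as an imported module of the host module.
opencl_src = mod.imported_modules[0].get_source()
print(opencl_src)

# Quick check for the invalid vector type in the emitted code.
print("contains float44:", "float44" in opencl_src)
```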
--- [Visit Topic](https://discuss.tvm.ai/t/error-when-compiling-using-opencl-error-compiler-frontend-failed-error-code-59/6503/1) to respond.