It means that LLVM is not enabled. The global function
"target.build.llvm" is defined in the file "src/target/llvm/llvm_module.cc".
Would you like to check whether this file is included in your build?
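If "target.build.llvm" is missing, a likely cause is that TVM was configured without LLVM support. Assuming the standard `config.cmake` build workflow, the flag to check is:

```cmake
# build/config.cmake -- enable LLVM codegen so that
# src/target/llvm/llvm_module.cc is compiled in and registers
# the "target.build.llvm" global function.
set(USE_LLVM ON)  # or: set(USE_LLVM /path/to/llvm-config)
```

After changing this, re-run cmake and rebuild before retrying the test.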
---
[Visit Topic](https://discuss.tvm.apache.org/t/cpptest-build-module-test-cc-check-fa

I find that when I create this PackedFunc* object, I can't find
"target.build.llvm".

**This Manager* m does not have a "target.build.llvm" attribute inside!**
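A lookup like this only succeeds if some translation unit actually registered the name at startup. As a rough, self-contained illustration (this is not TVM's actual code; TVM's real registry is `tvm::runtime::Registry`, and the class and signatures below are simplified stand-ins), a global function registry returns a null pointer for any name that was never registered, which is exactly what happens for "target.build.llvm" when llvm_module.cc is not compiled in:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <unordered_map>

// Simplified model of a global-function registry. In TVM, files like
// src/target/llvm/llvm_module.cc call a registration macro at static
// initialization time; if the file is excluded from the build, the
// name is simply absent from the table.
class Registry {
 public:
  using PackedFunc = std::function<int(int)>;

  // Register a function under a global name.
  static void Register(const std::string& name, PackedFunc f) {
    Table()[name] = std::move(f);
  }

  // Look up a function; returns nullptr when the name was never
  // registered (e.g. "target.build.llvm" in a build without LLVM).
  static const PackedFunc* Get(const std::string& name) {
    auto& tbl = Table();
    auto it = tbl.find(name);
    return it == tbl.end() ? nullptr : &it->second;
  }

 private:
  static std::unordered_map<std::string, PackedFunc>& Table() {
    static std::unordered_map<std::string, PackedFunc> table;
    return table;
  }
};
```

So the missing attribute is a symptom, not the bug: the fix is to make sure the registering file is built, not to patch the lookup site.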
---
[Visit Topic](https
**When I run the GTest file build_module_test.cc, I encounter this:**
Running main() from /home/qwe123/googletest/googletest/src/gtest_main.cc
unknown file: Failure
C++ exception with description "[11:59:42]
/home/qwe123/TVMsrc/src/target/codegen.cc:60:
This is a bit outwith my area of experience with TVM; however, I do recall
seeing that TVM had WebGL support, as discussed in [this 2018
blog post](https://tvm.apache.org/2018/03/12/webgl).
However, [this forum discussion from 2020 discussed deprecating it in favour of
WebGPU](https://discuss.tvm.apache.
You may want to make this message a reply to your [original
thread](https://discuss.tvm.apache.org/t/what-if-the-result-is-not-correct/11858/5)
to make things more coherent for other forum users.
---
[Visit Topic](https://discuss.tvm.apache.org/t/how-to-compare-layers-with-origin-network/
Because TVM leaves no information about the original network's layers in the
TVM graph, how can I compare dumped data against the original network's layers?
For example, BERT-large has 2000+ ops, but it is hard to figure out which op
relates to which original layer.
When you face an accuracy problem, you dump the data and com
Thanks for the reply.
* PyTorch -> Relay -> Ansor -> TVM's low-level code -> LLVM/NVCC (LLVM was used
above)
* Both CPU and GPU (in particular, NVIDIA T4)
---
[Visit Topic](https://discuss.tvm.apache.org/t/quantized-transformer/11850/3)
to respond.
You are receiving this because you enab