Thanks for your fix. I ran the fixed code, and the above error is fixed~
For an input shape of (2, 1, 240), the schedule can be generated. But for an
input shape of (3, 1, 240), there is another error: **Process finished with
exit code 136 (interrupted by signal 8: SIGFPE)**.
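A note on decoding that exit code: shells report 128 + N when a process is killed by signal N, so 136 means signal 8, which is SIGFPE. Despite the name, this usually comes from an integer division or modulo by zero in the generated code (e.g. a zero tile or split factor) rather than from floating-point math. A small Python helper to decode such codes:

```python
import signal

def decode_exit_code(code: int) -> str:
    """Shells report 128 + N when a process was killed by signal N."""
    if code > 128:
        return signal.Signals(code - 128).name
    return f"normal exit with status {code}"

print(decode_exit_code(136))  # prints "SIGFPE"
```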
---
Sure! BTW, if you set USE_LLVM to ON, by default it should show up if LLVM is
dynamically linked. A possible reason that it doesn't show up is that LLVM is
statically linked.
---
[Visit
Topic](https://discuss.tvm.apache.org/t/can-we-use-tvm-as-jit-compiler/8124/9)
to respond.
You are receiving this because you enabled mailing list mode.
Yes, I don't know why it was not showing up when I was inspecting `libtvm.so`
through `ldd`.
Thanks for the help @junrushao1994!
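For anyone else checking this: one way to script the `ldd` inspection is to grep its output for LLVM. The helper below is a sketch (the `build/libtvm.so` path and the exact library name, e.g. `libLLVM-11.so`, are examples and will vary per system; an empty result suggests LLVM was statically linked):

```python
import subprocess

def llvm_libs_from_ldd(ldd_output: str) -> list[str]:
    """Extract LLVM shared-library names from ldd output lines like
    'libLLVM-11.so => /usr/lib/libLLVM-11.so (0x...)'."""
    return [line.split()[0] for line in ldd_output.splitlines() if "LLVM" in line]

# Usage (path is an example):
# out = subprocess.run(["ldd", "build/libtvm.so"], capture_output=True, text=True).stdout
# print(llvm_libs_from_ldd(out))
```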
---
[Visit
Topic](https://discuss.tvm.apache.org/t/can-we-use-tvm-as-jit-compiler/8124/8)
to respond.
You mean the name of the LLVM shared library? It is something like libLLVM.so.
---
[Visit
Topic](https://discuss.tvm.apache.org/t/can-we-use-tvm-as-jit-compiler/8124/7)
to respond.
Hi @junrushao1994,
It might be that we already have the library available on our system. Could you
tell me the exact name of the library?
Thanks,
Giuseppe
---
[Visit
Topic](https://discuss.tvm.apache.org/t/can-we-use-tvm-as-jit-compiler/8124/6)
to respond.
I think we can somehow trim TVM to remove the graph runtime, VM, etc. However,
if you want to use TVM as a JIT compiler, you have to ship it with an LLVM build,
which is really large (>200 MB IIRC).
---
[Visit
Topic](https://discuss.tvm.apache.org/t/can-we-use-tvm-as-jit-compiler/8124/5)
to respond.
We do support running networks with DNNL in BYOC, but we mainly used it for BYOC
demonstration and didn't push its performance.
---
[Visit
Topic](https://discuss.tvm.apache.org/t/dose-tvm-only-support-mkldnn-dense-operation/8160/4)
to respond.
Hi guys,
I notice the code below replaces "." with "_" in variable names during the codegen phase.
What is the purpose of this?
[src/target/source/codegen_source_base.cc]
```cpp
std::string CodeGenSourceBase::GetUniqueName(std::string prefix) {
  for (size_t i = 0; i < prefix.size(); ++i) {
    if (prefix[i] == '.') prefix[i] = '_';
  }
  // ...
}
```
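My understanding (an assumption, not confirmed in the thread) is that this is identifier sanitization: TVM buffer/variable names can contain '.', e.g. scoped names like `data.shared`, but '.' is not a legal character in C/C++/CUDA identifiers, so the source codegen maps it to '_'. A one-line Python equivalent of that loop:

```python
def sanitize_identifier(prefix: str) -> str:
    """Equivalent of the C++ loop above: replace every '.'
    (illegal in C-family identifiers) with '_'."""
    return prefix.replace(".", "_")

print(sanitize_identifier("data.shared"))  # prints "data_shared"
```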
Hi @junrushao1994,
I was able to get something working in the end. It is actually very cool :)
My next question is: since I am only using the tensor language (no graph
runtime, no Relay, no auto-tuner, etc.), can we produce a libtvm.so that only
includes TE/TIR?
The problem is that libtvm.
Thanks for your reply~
The reason for using a large graph is that the build time is too long for the
small-subgraph LSTM computation function: building the LSTM computation
declaration as a small subgraph takes more than 3723 s, because the relay graph
for this computation declaration is so
Hi,
if I want to compile for an Adreno GPU,
is it the same as Mali?
Or do I need to do something else? Thanks
---
[Visit Topic](https://discuss.tvm.apache.org/t/compile-to-adreno-gpu/8179/1) to
respond.
@tkonolige Thanks for replying!
I read the source code of TVM and found that BYOC is exactly what I need. TVM
uses DNNL (the new name of MKL-DNN) instead of MKL-DNN.
However, is there a network benchmark that uses the DNNL compiler? I can only
find a snippet network in tests/python/relay/test_external_codegen.py.
Thanks for reporting. This PR
(https://github.com/apache/incubator-tvm/pull/6683) fixed this bug.
However, we typically don't treat large graphs like this as a single search
task: treating it as one large graph makes the search very inefficient.
For large graphs, we use Relay