Hi all, when I try to use the auto-scheduler, a question comes to mind: is
cost-model learning still necessary when I run all programs on fixed
hardware after learning for some time?
If the model still needs to update, does that mean the model cannot converge?
Then whether the accuracy o
I saw this comment in nn.softmax:
"This operator can be optimized away for inference"
For now, the BERT performance bottleneck is related to softmax.
What is the meaning of this comment, and how can this op be optimized away?
The IR may look like below:
%1579 = fn (%p0218: Tensor[(128, 12, 128, 128), fl
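One common reading of that comment (my interpretation, not a statement about what Relay actually does to this graph): softmax is strictly monotonic, so when inference ends in an argmax (e.g. picking the predicted class), the softmax can be dropped without changing the result. A minimal pure-Python illustration:

```python
import math

def softmax(xs):
    # Numerically stable softmax: shift by the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def argmax(xs):
    return max(range(len(xs)), key=lambda i: xs[i])

logits = [2.0, -1.0, 0.5, 3.5]

# softmax preserves the ordering of the logits, so the predicted class
# is identical with or without it.
assert argmax(logits) == argmax(softmax(logits))
print(argmax(logits))  # 3
```

Note this only applies to a final softmax feeding an argmax; the softmaxes inside BERT's attention layers feed into matmuls and cannot be removed this way, which may be why they remain a bottleneck.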
Ref: `python/tvm/runtime/module.py:export_library`
You can specify extra `options` when exporting the library, like:
`mod.export_library(file_name, options=["opt1", "opt2"])`
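To show where those `options` end up, here is a toy sketch of how the default `fcompile` (assumed to be `tvm.contrib.cc.create_shared`) assembles the compiler command line. `build_compile_cmd` is a hypothetical stand-in written for illustration, not a TVM API:

```python
# Hypothetical helper mirroring how the compiler invocation is assembled
# when export_library builds the shared object: everything passed via
# `options` is appended verbatim to the command line.
def build_compile_cmd(output, objects, options=None, cc="g++"):
    cmd = [cc, "-shared", "-fPIC", "-o", output] + list(objects)
    if options:
        cmd += list(options)
    return cmd

# e.g. hardening flags as the "safety compile options" in the topic title
cmd = build_compile_cmd(
    "model.so",
    ["model.o"],
    options=["-fstack-protector-strong", "-D_FORTIFY_SOURCE=2"],
)
print(" ".join(cmd))
```

In real code you would simply call `mod.export_library("model.so", options=[...])` and let TVM invoke the compiler for you.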
---
[Visit Topic](https://discuss.tvm.apache.org/t/export-so-file-with-safety-complie-options/12162/2) to respond.
OK, thank you very much :slightly_smiling_face:
---
[Visit Topic](https://discuss.tvm.apache.org/t/whether-the-repo-of-https-github-com-tlc-pack-relax-can-emit-the-relay-ir-level-text/12151/5) to respond.
Great, I will look into that.
In the future it could be helpful to have an extensive list of the Relay
operators that are generated for BYOC.
Thank you!
---
[Visit Topic](https://discuss.tvm.apache.org/t/byoc-doc-list-of-operators-to-implement/12155/5) to respond.
Not everything under `tvm.relay` is an op. The page you referred to just
lists the APIs under this namespace. I would suggest looking at the model IR
directly to get a sense of which ops should be supported, or referring to
other BYOC integrations for their supported ops (e.g.,
https://tvm.apa
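To make "look at the model IR directly" concrete, here is a toy sketch of collecting operator names by walking an expression tree, in the spirit of running `tvm.relay.analysis.post_order_visit` over a real module. The `Var`/`Call` classes below are hypothetical stand-ins, not TVM classes:

```python
# Minimal stand-ins for IR nodes, for illustration only.
class Var:
    def __init__(self, name):
        self.name = name

class Call:
    def __init__(self, op, args):
        self.op = op      # operator name, e.g. "nn.conv2d"
        self.args = args  # child expressions

def collect_ops(expr, ops=None):
    # Post-order walk: visit children first, then record this call's op.
    if ops is None:
        ops = []
    if isinstance(expr, Call):
        for a in expr.args:
            collect_ops(a, ops)
        ops.append(expr.op)
    return ops

# conv2d -> bias_add -> relu, like a small model body
body = Call("nn.relu", [
    Call("nn.bias_add", [
        Call("nn.conv2d", [Var("x"), Var("w")]),
        Var("b"),
    ]),
])
print(sorted(set(collect_ops(body))))  # ['nn.bias_add', 'nn.conv2d', 'nn.relu']
```

The same traversal over a real imported model yields exactly the set of ops a BYOC backend would need to handle.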
If there is a need we will do so, but so far we haven't seen a need.
:slight_smile:
---
[Visit Topic](https://discuss.tvm.apache.org/t/whether-the-repo-of-https-github-com-tlc-pack-relax-can-emit-the-relay-ir-level-text/12151/4) to respond.
Following the tutorial, I built the .so file with the TVM stack. But the .so
file seems to be a debug version, as shown below:
```
file updated.so
updated.so: ELF 64-bit LSB shared object, ARM aarch64, version 1 (SYSV),
dynamically linked, with debug_info, not stripped
```
is there any solution to a
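Whatever compile flags produced the `debug_info`, the DWARF sections can be removed after the fact with binutils' `strip`. A minimal sketch, assuming `gcc`, `file`, and `strip` are available on the host (`demo.c`/`demo.so` are placeholder names standing in for the exported library):

```shell
# Build a tiny shared object with debug info, then strip it.
echo 'int answer(void) { return 42; }' > demo.c
gcc -g -fPIC -shared demo.c -o demo.so

file demo.so   # reports "... with debug_info, not stripped"
strip --strip-debug demo.so
file demo.so | grep -q debug_info && echo "still has debug_info" || echo "debug_info removed"
```

For a cross-compiled aarch64 library you would use the matching cross toolchain's `strip` (e.g. `aarch64-linux-gnu-strip`) instead of the host one.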
That is really great, thank you for clarifying.
I was still interested in having a list of actual operators, however:
The list of Relay functions in
https://tvm.apache.org/docs/reference/api/python/relay/index.html contains
things like `setrecursionlimit` or `build`, which I don't think are mean
I wonder whether the architecture of VTA is fixed. It seems that recent
hardware-related commits all rewrite VTA in Chisel, with the same architecture
as the Xilinx HLS design. The last update of the Xilinx version of VTA was
already 15 months ago. Are the ISA and architecture of VTA fixed now?
Thank you very much :grinning:. I have another question: there is already a
provided interface, `relay_translator.from_relay()`, to convert a relay module
to a relax module. Is there any similar function to convert a relax module
back to a relay module in the future? Or can it be done in other ways?