The community is working on the next generation of Relay, called Relax, which supports dynamic shapes. You can take a look: [Relax: Co-Designing High-Level Abstraction Towards TVM Unity - TVMCon 2021](https://www.tvmcon.org/events/relax-co-designing-high-level-abstraction-towards-tvm-unity/)
---
Ref: `python/tvm/runtime/module.py:export_library`
You can pass extra `options` when exporting the library, like:
`mod.export_library(file_name, options=["opt1", "opt2"])`
---
[Visit Topic](https://discuss.tvm.apache.org/t/export-so-file-with-safety-complie-options/12162/2) to respond.
Not yet. It is not appropriate to modify the pass level because it may impact other cases. I reported a bug and am still waiting for suggestions from the community.
---
[Visit Topic](https://discuss.tvm.apache.org/t/crash-when-opt-level-0/12131/4)
to respond.
Hi @Haoyang, I think it is the same as [[Bug][VM] Segmentation fault triggered if opt level set to 0 · Issue #10324 · apache/tvm (github.com)](https://github.com/apache/tvm/issues/10324)
---
[Visit Topic](https://discuss.tvm.apache.org/t/crash-when-opt-level-0/12131/2)
to respond.
Hi @donglinz, see the doc [Install from Source — tvm 0.8.dev0 documentation
(apache.org)](https://tvm.apache.org/docs/install/from_source.html#install-from-source)
```
To debug with IRs, set(USE_RELAY_DEBUG ON) and set environment variable
TVM_LOG_DEBUG.
export TVM_LOG_DEBUG="ir/transform.cc=1"
```
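The variable takes a comma-separated list of `file=level` specs, so you can enable debug logging for several translation units at once, as in the docs' example:

```shell
# Enable per-file debug logging for the listed translation units.
export TVM_LOG_DEBUG="ir/transform.cc=1,relay/ir/transform.cc=1"
```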
I guess you declared `elem_offset` as a floating-point number, say 1.0 rather than 1. :)
---
[Visit Topic](https://discuss.tvm.apache.org/t/data-type-error-while-using-tensorize/9530/2) to respond.
Hi @SYangDong, use `b = tvm.tir.const(100, dtype="float32")` instead of assigning the value directly.
---
[Visit Topic](https://discuss.tvm.apache.org/t/if-scope-in-ir-builder/9332/2)
to respond.
[quote="JosseVanDelm, post:1, topic:9335"]
```
Ab = tvm.tir.decl_buffer(a.shape, a.dtype, name="A", offset_factor=1, strides=[2,1])
Bb = tvm.tir.decl_buffer(b.shape, b.dtype, name="B", offset_factor=1, strides=[2,1])
Cb = tvm.tir.decl_buffer(c.shape, c.dtype, name="C", offset_factor=1, strides=[2,1])
```
[/quote]
@cali I am not sure if there is a better way to achieve it. Maybe you can add a bool member **drop_init** to **CommReducerNode**; once it is true, it is safe to drop the init in the `MakeReduction` function.
---
[Visit Topic](https://discuss.tvm.apache.org/t/disable-initialization-in-te-compute/) to respond.
@cali What is your goal in dropping the init part of the reduction? For compute operations, you cannot do that. You can check `compute_op.cc:MakeReduction`.
---
[Visit Topic](https://discuss.tvm.apache.org/t/disable-initialization-in-te-compute/9252/2) to respond.
Hi Pei,
IMO, after the InferRootBound step, the root iter vars of the current producer stage may change, because its consumers may each have requested a different range for each dim.
For example, here we split the axis of **z_global**.
```
import tvm
from tvm import te
n = 16
factor = 3
x = te.placeholder(...)
```