[Apache TVM Discuss] [Questions] Generate multiple kernels for operators with symbolic shapes

2022-03-15 Thread leeexyz via Apache TVM Discuss
The community is working on the next generation of Relay, called Relax, which supports dynamic shapes. You can take a look: [Relax: Co-Designing High-Level Abstraction Towards TVM Unity - TVMCon 2021](https://www.tvmcon.org/events/relax-co-designing-high-level-abstraction-towards-tvm-unity/)

[Apache TVM Discuss] [Questions] Export .so file with safety compile options

2022-02-24 Thread leeexyz via Apache TVM Discuss
Ref: `python/tvm/runtime/module.py:export_library`. You can specify extra `options` when exporting the library, like `mod.export_library(file_name, options=["opt1", "opt2"])`. --- [Visit Topic](https://discuss.tvm.apache.org/t/export-so-file-with-safety-complie-options/12162/2) to respond.
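For example, a minimal sketch of passing extra compiler/linker flags through `export_library`, which forwards them to the underlying compile step; the hardening flags below are illustrative choices, not something TVM requires:

```
import tvm
from tvm import te

# A tiny workload just to have a module to export.
n = 1024
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)
mod = tvm.build(s, [A, B], target="llvm", name="add_one")

# The options list is handed to the compiler invocation that creates the
# shared library; the flags here are just an illustration.
mod.export_library(
    "add_one.so",
    options=["-fstack-protector-strong", "-Wl,-z,relro", "-Wl,-z,now"],
)
```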

[Apache TVM Discuss] [Questions] Crash when opt_level = 0

2022-02-20 Thread leeexyz via Apache TVM Discuss
Not yet; it is not appropriate to modify the pass level, because it may impact other cases. I reported a bug and am still waiting for suggestions from the community. --- [Visit Topic](https://discuss.tvm.apache.org/t/crash-when-opt-level-0/12131/4) to respond.

[Apache TVM Discuss] [Questions] Crash when opt_level = 0

2022-02-20 Thread leeexyz via Apache TVM Discuss
Hi @Haoyang, I think it is the same as [[Bug][VM] Segmentation fault triggered if opt level set to 0 · Issue #10324 · apache/tvm (github.com)](https://github.com/apache/tvm/issues/10324). --- [Visit Topic](https://discuss.tvm.apache.org/t/crash-when-opt-level-0/12131/2) to respond.

[Apache TVM Discuss] [Questions] How to dump IRs for each pass when building a model

2021-11-16 Thread leeexyz via Apache TVM Discuss
Hi @donglinz, see the doc [Install from Source — tvm 0.8.dev0 documentation (apache.org)](https://tvm.apache.org/docs/install/from_source.html#install-from-source):

```
To debug with IRs, set(USE_RELAY_DEBUG ON) and set environment variable TVM_LOG_DEBUG.
export TVM_LOG_DEBUG="ir/transform.cc=1"
```
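If you would rather stay in Python, a pass instrument also gives per-pass dumps. A minimal sketch, assuming a TVM version new enough to provide `tvm.ir.instrument.pass_instrument` and the `instruments` argument of `PassContext`:

```
import tvm
from tvm import relay
from tvm.ir.instrument import pass_instrument

@pass_instrument
class PrintIR:
    """Print the pass name and the module before every pass runs."""
    def run_before_pass(self, mod, info):
        print("Running pass:", info.name)
        print(mod)

# A toy Relay module, just to have something to build.
x = relay.var("x", shape=(1, 16), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

with tvm.transform.PassContext(opt_level=3, instruments=[PrintIR()]):
    lib = relay.build(mod, target="llvm")
```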

[Apache TVM Discuss] [Questions] Data type error while using tensorize

2021-03-25 Thread leeexyz via Apache TVM Discuss
I guess you declared `elem_offset` as a floating-point number, let's say 1.0 instead of 1. :) --- [Visit Topic](https://discuss.tvm.apache.org/t/data-type-error-while-using-tensorize/9530/2) to respond.
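For reference, a minimal sketch of keeping `elem_offset` integral in the buffer declaration used for tensorize; the names here are illustrative, not taken from the original post:

```
import tvm
from tvm import te

n = 16
a = te.placeholder((n,), name="a", dtype="float32")

# elem_offset must be an integer expression: use an int32 var (or an integer
# constant such as 1), never a float like 1.0.
offset = te.var("offset", dtype="int32")
Ab = tvm.tir.decl_buffer(
    a.shape, a.dtype, name="A", elem_offset=offset, offset_factor=1
)
```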

[Apache TVM Discuss] [Questions] If_scope in ir_builder

2021-03-08 Thread leeexyz via Apache TVM Discuss
Hi @SYangDong, use `b = tvm.tir.const(100, dtype="float32")` instead of assigning the plain Python number directly. --- [Visit Topic](https://discuss.tvm.apache.org/t/if-scope-in-ir-builder/9332/2) to respond.
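A minimal sketch of the pattern; the buffer and condition are made up for illustration:

```
import tvm

ib = tvm.tir.ir_builder.create()
A = ib.allocate("float32", 8, name="A", scope="global")

# Use a typed TIR constant instead of a bare Python number, so the comparison
# inside if_scope is built from TIR expressions of matching dtype.
b = tvm.tir.const(100, dtype="float32")
with ib.for_range(0, 8, name="i") as i:
    with ib.if_scope(A[i] > b):
        A[i] = b

print(ib.get())
```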

[Apache TVM Discuss] [Questions] [TE] Tensorize Elementwise Sum

2021-03-08 Thread leeexyz via Apache TVM Discuss
[quote="JosseVanDelm, post:1, topic:9335"] ``` Ab = tvm.tir.decl_buffer(a.shape, a.dtype, name="A", offset_factor=1, strides=[2,1]) Bb = tvm.tir.decl_buffer(b.shape, b.dtype, name="B", offset_factor=1, strides=[2,1]) Cb = tvm.tir.decl_buffer(c.shape, c.dtype, name="C", offset_factor

[Apache TVM Discuss] [Questions] Disable initialization in te.compute

2021-03-05 Thread leeexyz via Apache TVM Discuss
@cali I am not sure if there is a better way to achieve this. Maybe you can add a bool member **drop_init** to **CommReducerNode**; once it is true, it is safe to drop the init in the `MakeReduction` function.

[Apache TVM Discuss] [Questions] Disable initialization in te.compute

2021-03-03 Thread leeexyz via Apache TVM Discuss
@cali What is your goal in dropping the init part of the reduction? For a compute operation, you cannot do that. You can check `compute_op.cc:MakeReduction`. --- [Visit Topic](https://discuss.tvm.apache.org/t/disable-initialization-in-te-compute/9252/2) to respond.
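For context, a minimal sketch (not from the thread) of where the init comes from: the identity element of the reducer, which `MakeReduction` emits as the initialization before the reduce loop:

```
import tvm
from tvm import te

# The second lambda provides the identity element; it becomes the
# "B[i] = 0" init statement in the lowered IR.
mysum = te.comm_reducer(lambda x, y: x + y, lambda t: tvm.tir.const(0, t), name="mysum")

n = 16
A = te.placeholder((n, n), name="A")
k = te.reduce_axis((0, n), name="k")
B = te.compute((n,), lambda i: mysum(A[i, k], axis=k), name="B")

s = te.create_schedule(B.op)
print(tvm.lower(s, [A, B], simple_mode=True))
```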

[Apache TVM Discuss] [Questions] Why need "passdown domain" after "InferRootBound"

2021-02-18 Thread leeexyz via Apache TVM Discuss
Hi Pei, IMO, after the InferRootBound step, the ranges of the root iter vars of the current producer stage may change, because all the consumers requested a different range for each dim. For example, here we split the axis of **z_global**.

```
import tvm
from tvm import te

n = 16
factor = 3
# reconstructed from here on; an elementwise op with a cache_write stage z_global is assumed
x = te.placeholder((n,), name="x")
z = te.compute((n,), lambda i: x[i] * 2.0, name="z")
s = te.create_schedule(z.op)
z_global = s.cache_write(z, "global")
xo, xi = s[z_global].split(z_global.op.axis[0], factor=factor)
print(tvm.lower(s, [x, z], simple_mode=True))
```