Thanks for your quick reply.
Hey everyone,
I am currently working on a project about dynamic shapes. I found that the goal
of OpStrategy is to enable TVM to generate multiple kernels for operators with
symbolic shapes, but I notice that there have not been any updates on this
feature. So how do I need to modify the te_compile
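For context, "multiple kernels for a symbolic shape" usually means shape bucketing: compiling one specialized kernel per shape range and picking one at runtime. A toy sketch in plain Python (the bucket bounds and kernel names are made up for illustration; this is not TVM API):

```python
# Toy sketch of shape-bucketed dispatch. The "kernels" are stand-in
# Python functions; in TVM each bucket would be a separately compiled
# kernel specialized for that shape range.

def kernel_small(x):
    # hypothetical kernel tuned for n <= 64
    return [v * 2 for v in x]

def kernel_large(x):
    # hypothetical kernel tuned for n > 64
    return [v * 2 for v in x]

BUCKETS = [(64, kernel_small), (float("inf"), kernel_large)]

def dispatch(x):
    """Pick the first kernel whose bucket bound covers len(x)."""
    n = len(x)
    for bound, kernel in BUCKETS:
        if n <= bound:
            return kernel(x)

dispatch([1, 2, 3])  # falls into the small bucket
```

The dispatch cost is one comparison per bucket, so the number of buckets trades compile time and binary size against per-call overhead.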
Yes, TIR does support it, but you have to change your code a little bit.
```
A = te.var('A')
B = te.var('B')
callee = tir.PrimFunc([A, B], tir.Evaluate(tir.Add(A, B)))
callee = callee.with_attr('global_symbol', 'callee')
main = tir.PrimFunc([A, B], tir.Evaluate(tir.Call('int32', callee, [A, B])))
```
Hi there, I tried to use tir.script to implement my customized operator.
The code generation is correct when I print the device code. However, it raises
a kernel launch error, so I tried to print the IR. What I found is below:
`tir.tvm_call_packer("main_kernel0", A, B, C, 1, dtype="
Thank you @leeexyz. Yeah, we can use ```tvm.tir.const``` or a new buffer. I
mean, is there any mechanism to prevent users from using Python variables
within an ```if_scope```? For example, an error message telling users to use
```tvm.tir.const```, since it's quite easy to confuse the Python varia
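The pitfall generalizes to any tracing-style builder: a raw Python value passed as a condition is fixed at trace time, while a symbolic expression is recorded into the IR and evaluated at runtime. A toy sketch in plain Python of how a builder could reject raw values with a helpful message (this is not TVM's actual implementation; `SymExpr`, `const`, and `ToyBuilder` are made-up names):

```python
class SymExpr:
    """Stand-in for a symbolic IR expression node."""
    def __init__(self, text):
        self.text = text

def const(value):
    # analogue of tvm.tir.const: lift a Python value into the IR
    return SymExpr(str(value))

class ToyBuilder:
    """Minimal tracing builder that records statements as strings."""
    def __init__(self):
        self.stmts = []

    def if_scope(self, cond):
        # Reject raw Python values so users don't bake a trace-time
        # constant into what they expect to be a runtime branch.
        if not isinstance(cond, SymExpr):
            raise TypeError(
                "if_scope expects a symbolic expression; "
                "wrap Python values with const(...)")
        self.stmts.append(f"if ({cond.text})")

ib = ToyBuilder()
ib.if_scope(const(1))   # ok: recorded as a symbolic condition
# ib.if_scope(1)        # would raise TypeError with the hint above
```

An isinstance check like this is cheap and turns a silent mis-trace into an immediate, explainable error.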
Hi, I'm using the ir_builder to construct a CUDA kernel, but I encounter a
problem with if_scope:
```
import tvm
from tvm import te

ib = tvm.tir.ir_builder.create()
n = te.size_var("n")
A = ib.pointer("float32", name="A")
tmod = tvm.tir.truncmod
with ib.for_range(0, n, name="i") as i:
with ib.if_scope(tm