@Hzfengsy @spectrometerHBH I'd be interested to hear your thoughts on this as I
imagine it could have some overlap with the work you're doing on TensorIR.
---
[Visit
Topic](https://discuss.tvm.apache.org/t/rfc-refactor-the-compile-engine-to-expose-a-relay-te-translator/8417/5)
to respond.
Yeah. In most cases we can do the vectorization in TIR instead of relying on LLVM.
---
[Visit
Topic](https://discuss.tvm.apache.org/t/role-of-the-llvm-autovectorizer-in-tvm/8388/3)
to respond.
Could you print out the lowered code? You can use `tvm.lower(s, args)` where
`s` is the schedule. Also, if you provide a minimal example to run, I can take
a look at it.
---
[Visit
Topic](https://discuss.tvm.apache.org/t/quantization-and-3d-convolution/8338/5)
to respond.
Dear Community:

On behalf of the organizing committee, we are excited to announce that registration for the Apache TVM Conference (https://tvmconf.org/) is now open. You can click the link above to register.

TQ
Another requirement I have for the general TE translator is support for an
arbitrary Relay function, including Relay functions with more than one
reduce op (e.g., conv2d). The current compile engine doesn't allow this pattern
because it selects one schedule implementation per Relay function.
In this PR https://github.com/apache/incubator-tvm/pull/6885, the node name is
stored in Span.SourceName.
Since Span.SourceName is supposed to represent the name of a source file, I'd
suggest adding another field `hint` to Span to represent the node or layer
name, which is common in models.
And when model