Is it possible to extend TF/PyTorch to keep this information?
---
[quote="hogepodge, post:1, topic:10305"]
What platforms are you using TVM for?
* [ ] X86 CPU
* [ ] ARM CPU
* [ ] Other CPU
* [ ] NVidia GPU
* [ ] AMD GPU
* [ ] Other GPU
* [ ] Embedded Platform
[/quote]
We are using TVM for a DSA NPU. Could you add an option for that? Thanks!
---
Here is an example of intrinsic selection.
```
for (i, 0, 65535) {
C[i] = (A[i] + B[i])
}
```
```
Call Engine: veadd_mm
// normal ===stmt cost : 2061.94 (smallest cost) shape : 1x65535
[ tx.veadd_mm(tir.tvm_access_ptr(tir.type_annotation(), C, (int64)0, (int64)65535, 2), tir.tvm_access_ptr(
```
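For anyone who wants to reproduce something like this, below is a minimal sketch (plain TVM TE, Python) of how such an elementwise add could be mapped onto a custom vector intrinsic with `tensorize`. The intrinsic name `veadd_mm`, its calling convention, and the float32 dtype are assumptions taken from the dump above, not a real backend API; an actual NPU target would declare its own intrinsic and codegen hook.

```
import tvm
from tvm import te

N = 65535  # matches the 1x65535 shape in the cost dump above
A = te.placeholder((N,), name="A", dtype="float32")
B = te.placeholder((N,), name="B", dtype="float32")
C = te.compute((N,), lambda i: A[i] + B[i], name="C")


def intrin_veadd(length):
    # Declare a small compute pattern that describes what the intrinsic does.
    a = te.placeholder((length,), name="a", dtype="float32")
    b = te.placeholder((length,), name="b", dtype="float32")
    c = te.compute((length,), lambda i: a[i] + b[i], name="c")
    Ab = tvm.tir.decl_buffer(a.shape, a.dtype, name="Ab", offset_factor=1)
    Bb = tvm.tir.decl_buffer(b.shape, b.dtype, name="Bb", offset_factor=1)
    Cb = tvm.tir.decl_buffer(c.shape, c.dtype, name="Cb", offset_factor=1)

    def intrin_func(ins, outs):
        aa, bb = ins
        cc = outs[0]
        ib = tvm.tir.ir_builder.create()
        # Emit a call to the (hypothetical) vector-add engine; the NPU codegen
        # would have to recognize this symbol and emit the SIMD instruction.
        ib.emit(
            tvm.tir.call_extern(
                "int32", "veadd_mm",
                cc.access_ptr("w"), aa.access_ptr("r"), bb.access_ptr("r"), length,
            )
        )
        return ib.get()

    return te.decl_tensor_intrin(c.op, intrin_func, binds={a: Ab, b: Bb, c: Cb})


s = te.create_schedule(C.op)
# Replace the whole i loop with one intrinsic call, mirroring the match above.
s[C].tensorize(C.op.axis[0], intrin_veadd(N))
print(tvm.lower(s, [A, B, C], simple_mode=True))
```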
---
All we need is **a target backend that can emit and optimize intrinsic IR** (a rough sketch of such an intrinsic-lowering hook follows below). Let's take a look at what we've done in akg, a tensor compiler for the Davinci core built on top of TVM.

**Why do we do this?**
1) An NPU has more SIMD intrinsics
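As a rough illustration of the "backend that can emit intrinsic IR" idea, here is a minimal sketch assuming a recent TVM where `tvm.target.intrin.register_intrin_lowering` is available (older releases used `register_intrin_rule`). The target name `mynpu` and the extern symbol `npu_vexp` are illustrative assumptions, not real TVM or akg APIs.

```
import tvm
from tvm import tir
from tvm.target.intrin import register_intrin_lowering


def _lower_exp_to_npu(op):
    # op is the tir.Call for tir.exp; re-emit it as a call to the NPU's
    # (hypothetical) vectorized exp intrinsic so the codegen can pick it up.
    return tir.call_pure_extern(op.dtype, "npu_vexp", *op.args)


# Register the lowering rule for the (hypothetical) "mynpu" target.
register_intrin_lowering("tir.exp", target="mynpu", f=_lower_exp_to_npu)
```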
---
Hi,
ReduceNode only exists in ScheduleOps in TVM 0.6; is that still true in the latest TVM?
If it is, I'm confused, because two passes that run after ScheduleOps still contain ReduceNode logic.
Can anyone explain a little bit? Thanks in advance.
https://github.com/apache/incubator-tvm/blob/1831c17998b29f3797f364
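For context on the question, here is a minimal sketch (classic TE flow, API names assumed from a recent TVM release) showing that a `tir.Reduce` node appears only in the compute definition and is expected to be expanded into explicit init/update loops when the schedule is lowered via ScheduleOps.

```
import tvm
from tvm import te

n = 1024
A = te.placeholder((n,), name="A")
k = te.reduce_axis((0, n), name="k")
B = te.compute((1,), lambda i: te.sum(A[k], axis=k), name="B")

# At the TE level the reduction is represented by a Reduce expression.
print(type(B.op.body[0]))  # expected: tvm.tir.expr.Reduce

# After scheduling and lowering, the Reduce has been expanded into loops,
# so no Reduce node should survive in the lowered TIR.
s = te.create_schedule(B.op)
print(tvm.lower(s, [A, B], simple_mode=True))
```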