Received with many thanks!
---
[Visit Topic](https://discuss.tvm.apache.org/t/ir-level-or-hierarchical-relationship/7869/7) to respond.
Relay is used for "graph" (or full-program-level) optimization, and TIR is
used for imperative loop optimization of (mostly) dense linear algebra
kernels.
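To make the division of labor concrete, here is a minimal sketch assuming TVM ~0.7 APIs (the add+relu computation is an arbitrary example, not something from this thread):

```
import tvm
from tvm import relay, te

# Relay: whole-program ("graph") view -- operators, not loops.
x = relay.var("x", shape=(1, 64), dtype="float32")
y = relay.nn.relu(x + relay.const(1.0))
print(tvm.IRModule.from_expr(relay.Function([x], y)))

# TIR (via TE): the imperative loop nest for a single kernel.
A = te.placeholder((1, 64), name="A")
B = te.compute(A.shape, lambda i, j: te.max(A[i, j] + 1.0, 0.0), name="B")
s = te.create_schedule(B.op)
print(tvm.lower(s, [A, B], simple_mode=True))
```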
---
[Visit Topic](https://discuss.tvm.apache.org/t/ir-level-or-hierarchical-relationship/7869/6) to respond.
I think someone just needs to expose this in Python; you could probably do so
with
```
// e.g. in src/tir/transforms/simplify.cc, where arith::StmtSimplifier lives:
TVM_REGISTER_GLOBAL("SimplifyStmt").set_body_typed([](tir::Stmt stmt) {
  arith::Analyzer analyzer;
  return arith::StmtSimplifier(&analyzer).Simplify(stmt);
});
```
and in Python:
```
simplify_stmt = tvm.get_global_func("SimplifyStmt")
```
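A hedged usage sketch, assuming the registration above has been compiled into TVM and that `tvm.lower` returns an IRModule with a "main" PrimFunc (TVM ~0.7 behavior); the lowered expression is an arbitrary example:

```
import tvm
from tvm import te

# Build a trivial schedule and lower it to obtain a tir.Stmt to simplify.
A = te.placeholder((16,), name="A")
B = te.compute(A.shape, lambda i: A[i] + 0, name="B")
s = te.create_schedule(B.op)
mod = tvm.lower(s, [A, B])

simplify_stmt = tvm.get_global_func("SimplifyStmt")
print(simplify_stmt(mod["main"].body))  # e.g. A[i] + 0 folds to A[i]
```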
---
You are using torch.jit.script. Please try torch.jit.trace.
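For reference, a minimal sketch of the trace-then-import flow; the toy module and shapes below are placeholders standing in for the RNN-T, not details from the original post:

```
import torch
import tvm
from tvm import relay

# Placeholder model; the real RNN-T would go here.
model = torch.nn.Sequential(torch.nn.Linear(80, 320), torch.nn.ReLU()).eval()
example = torch.randn(1, 80)

# trace instead of script: records one concrete execution path
traced = torch.jit.trace(model, example)

mod, params = relay.frontend.from_pytorch(traced, [("input", example.shape)])
```

One caveat: tracing bakes in a single execution path, so any data-dependent control flow in the RNN-T decoder would not survive the trace.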
---
[Visit Topic](https://discuss.tvm.apache.org/t/import-rnn-t-pytorch-model-into-tvm/7874/2) to respond.
Thanks for your reply. I see TVM's C++ API now; I'll try it.
---
[Visit Topic](https://discuss.tvm.apache.org/t/profiling-tvm-module/7870/5) to respond.
Hmm, why do we need to profile Python? It is slow anyway.
---
[Visit Topic](https://discuss.tvm.apache.org/t/profiling-tvm-module/7870/4) to respond.
I see, but perf only gives summary statistics covering the whole program,
which adds a lot of noise when profiling a single Module. It seems there is no
tool that lets us inspect cache behavior for just a Python code region.
---
[Visit Topic](https://discuss.tvm.apache.org/t/profiling-tvm-module/7870) to respond.
We tried to import the RNN-T PyTorch model
https://github.com/mlperf/inference/tree/master/v0.7/speech_recognition/rnnt/pytorch
into TVM, using the pre-trained RNN-T model for MLPerf Inference
(https://zenodo.org/record/3662521).
We hit this error:
NotImplementedError: The following operators are not implemented:
---
Thanks! It seems TVM has two levels of IR: Relay IR (the higher level,
replacing NNVM) and TIR (the lower level, replacing Halide IR). Is that right?
Thanks again!
---
[Visit Topic](https://discuss.tvm.apache.org/t/ir-level-or-hierarchical-relationship/7869/5) to respond.
Thanks for your reply!
---
[Visit Topic](https://discuss.tvm.apache.org/t/ir-level-or-hierarchical-relationship/7869/4) to respond.
I was wondering if there is a strong need for in-place mutation. Introducing
in-place operators is a bit troublesome, and the gain is usually small
(because most of those operations can be inlined).
---
[Visit Topic](https://discuss.tvm.apache.org/t/supporting-in-place-operations/7871/3) to respond.
It looks like a job for the Linux perf tools: https://perf.wiki.kernel.org/index.php/Main_Page
For example, something like `perf stat -e cache-misses,LLC-load-misses <your program>` should report the counters you want.
---
[Visit Topic](https://discuss.tvm.apache.org/t/profiling-tvm-module/7870/2) to respond.
Basically Relay => TE => TIR. NNVM is deprecated, and HalideIR is no longer
used.
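As a hedged sketch of that pipeline in action (assuming TVM ~0.7 APIs), `relay.build` drives the whole flow: Relay-level passes run first, then each fused group is lowered through TE/TIR before codegen:

```
import tvm
from tvm import relay

x = relay.var("x", shape=(4,), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], x + x))

# Graph-level (Relay) passes, then loop-level (TIR) lowering, happen in here.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm")
```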
---
[Visit Topic](https://discuss.tvm.apache.org/t/ir-level-or-hierarchical-relationship/7869/3) to respond.
Please see http://tvm.apache.org/docs/dev/index.html
---
[Visit Topic](https://discuss.tvm.apache.org/t/ir-level-or-hierarchical-relationship/7869/2) to respond.
I was [looking for something like this a couple of months
back](https://discuss.tvm.apache.org/t/reshape-in-place-using-relay/6856), but
to no avail.
It would be useful to have; I'm just unsure what changes would be needed. In a
sense we already have in-place operations when we fuse conv2d+relu layers.
---
I am fed up with this unstable API. Nothing works!
---
[Visit Topic](https://discuss.tvm.apache.org/t/resnet50-based-one-stage-detector-model-conversion-gets-hung-up/2373/7) to respond.
For a number of use cases in TVM, it would be valuable to support in-place
operators. One such operator would be strided_set, which currently takes two
tensors, a primary tensor and a subtensor, and writes the subtensor into the
primary tensor at a particular offset. It doesn't do this in-place.
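For illustration, a NumPy sketch of what strided_set computes today; the begin/end/strides naming follows the Relay op, but this reference implementation is mine, not quoted from the docs:

```
import numpy as np

def strided_set_ref(data, v, begin, end, strides):
    # Out-of-place reference: copy the primary tensor, then write the
    # subtensor v into the selected strided region of the copy.
    out = data.copy()  # this copy is what an in-place variant would avoid
    idx = tuple(slice(b, e, s) for b, e, s in zip(begin, end, strides))
    out[idx] = v
    return out

data = np.zeros((4, 4), dtype="float32")
v = np.ones((2, 2), dtype="float32")
print(strided_set_ref(data, v, begin=(0, 0), end=(4, 4), strides=(2, 2)))
```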
---
Hey, I want to profile a TVM module, e.g. cache misses, LLC misses, etc. How
can I do this?
---
[Visit Topic](https://discuss.tvm.apache.org/t/profiling-tvm-module/7870/1) to respond.
Hi all,
I'm pretty confused about the relationship between Halide IR, TIR, Relay IR,
and NNVM. What's the difference between them?
At present, which IR is used for TVM's computation graph? And which IR is used
when lowering? Or do both use Relay IR?