I understand that TVM does not currently support training deep learning networks.
However, TOPI and Relay seem to contain many of the operators needed for training,
such as **Batch Norm** and **Dropout**.
In the case of Batch Norm, does it affect the result of inference? Or is there
no problem with the calculation at inference time?
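For context on the Batch Norm question: at inference time, batch norm reduces to a fixed affine transform using statistics stored during training, along these lines (a NumPy sketch of the usual formulation, not TVM's actual kernel):

```python
import numpy as np

def batch_norm_infer(x, gamma, beta, moving_mean, moving_var, eps=1e-5):
    # inference-mode batch norm: a deterministic scale-and-shift using the
    # moving statistics accumulated during training; it does not depend on
    # the contents of the current batch
    return gamma * (x - moving_mean) / np.sqrt(moving_var + eps) + beta
```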
Hi all,
I am experimenting with the IRModule update functionality.
My goal is to update an IRModule with another Relay function as follows, but it
throws an error.
Any comments/suggestions? Thanks in advance.
```python
import tvm
from tvm import relay

def module_update():
    x0 = relay.var('x0', shape=(5, 1))
    # (snippet truncated in the original post)
```
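Since the snippet above is cut off, here is a minimal sketch of one way to update an `IRModule` with a second Relay function; the global name `my_func` and the function bodies are placeholders:

```python
import tvm
from tvm import relay

# build an initial module from a trivial Relay function
x = relay.var("x", shape=(5, 1))
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.relu(x)))

# update the module: bind another function to a new global variable
y = relay.var("y", shape=(5, 1))
gv = relay.GlobalVar("my_func")
mod[gv] = relay.Function([y], relay.tanh(y))

print(mod)  # shows both @main and @my_func
```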
hi @davide-giri,
there's been some work (see below) to optimize TVM-generated code on RISC-V. at
`main` today, there isn't anything specific to RISC-V checked-in, but i'm also
not aware of anything that would prevent you from running on RISC-V today.
could you provide some more clarification?
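in the meantime, here's a sketch of cross-compiling a Relay module for RISC-V through the stock LLVM backend (assuming your LLVM build has the RISC-V target enabled, and that `mod` and `params` are a Relay module and weights you already have):

```python
import tvm
from tvm import relay
from tvm.contrib import cc

# 64-bit RISC-V Linux triple; adjust to match your toolchain
target = tvm.target.Target("llvm -mtriple=riscv64-unknown-linux-gnu")

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# link the shared library with a RISC-V cross-compiler
lib.export_library("model.so", cc.cross_compiler("riscv64-unknown-linux-gnu-gcc"))
```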
I'm not sure about the full extent of the support, but I think the PR [Add µTVM
Zephyr support + QEMU regression test
#6603](https://github.com/apache/incubator-tvm/pull/6603) should be helpful for
evaluating µTVM on RISC-V.
Did you mean LOG_BLOCK=4 or just BLOCK=4?
If LOG_BLOCK=4, that means BLOCK_IN and BLOCK_OUT would both be 16. A single
GEMM instruction would therefore perform a 16x16 block of fused multiply-adds
(MACs), i.e. 256 MACs per GEMM instruction. By my calculation,
256 MACs/cycle * 0.142 GHz = 36.352 GMAC/s (about 72.7 GOPS if each MAC counts as two operations)
Not
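To double-check the arithmetic above in a few lines of Python (assuming one GEMM instruction issues per cycle at the 0.142 GHz clock mentioned):

```python
LOG_BLOCK = 4
BLOCK_IN = BLOCK_OUT = 1 << LOG_BLOCK   # 16 when LOG_BLOCK = 4
macs_per_gemm = BLOCK_IN * BLOCK_OUT    # 256 MACs per GEMM instruction
clock_ghz = 0.142                       # assumed accelerator clock
gmacs = macs_per_gemm * clock_ghz       # 36.352 GMAC/s
print(gmacs, 2 * gmacs)                 # 36.352 GMAC/s -> 72.704 GOPS
```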