Hi, I'm using AutoTVM to tune my kernels, but I still see further optimization opportunities in the tuned results. For example, for a tensor with two consecutive topi.reshape calls, the generated code does not eliminate the first one. I'm wondering whether there is an option to enable IR optimization for this.
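For concreteness, a minimal sketch of the pattern I mean (the shapes are arbitrary): two consecutive topi.reshape calls on the same tensor, where the intermediate reshape could in principle be folded away.
```
import tvm
from tvm import te, topi

# Two consecutive reshapes on the same tensor; the intermediate one is
# redundant, yet with a default schedule both stages show up in the lowered IR.
A = te.placeholder((4, 8), name="A")
B = topi.reshape(A, (8, 4))
C = topi.reshape(B, (32,))

s = te.create_schedule(C.op)
print(tvm.lower(s, [A, C], simple_mode=True))
```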
Hello. I am new to VTA and I want to use it to compile a NN model (e.g. from Keras), convert the code into HLS or Verilog, and synthesize it with Vivado. I have read the documents but did not find anything useful. Can you please help me with how I can do this? I really appreciate it. Thanks!
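For reference, a minimal sketch of where I would start (assumptions: a tf.keras MobileNet stands in for the Keras model, and the input shape/layout are illustrative); as I understand it, importing into Relay is the front-end step before any VTA-specific compilation, which is the part I am asking about.
```
import tvm
from tvm import relay
import tensorflow as tf

# Stand-in Keras model; weights are skipped since only the graph matters here.
model = tf.keras.applications.MobileNet(input_shape=(224, 224, 3), weights=None)

# from_keras defaults to NCHW layout, so the shape dict is given in NCHW.
shape_dict = {model.input_names[0]: (1, 3, 224, 224)}
mod, params = relay.frontend.from_keras(model, shape_dict)
print(mod)
```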
Hi,
I am following the BYOC example for the C codegen, and I have a few questions:
1. During testing, do I need to recompile the whole TVM every time I make a
modification to the codegen? (Provided that I added the flag for my codegen in
the TVM CMake file.)
2. Once I have compiled TVM with my C codegen, how
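For context, this is roughly how I expect the codegen to be exercised once TVM is rebuilt (a minimal sketch assuming the stock "ccompiler" target from the BYOC docs and a toy add function in place of my real module):
```
import tvm
from tvm import relay
from tvm.relay import transform

# Toy module so the sketch is self-contained.
x = relay.var("x", shape=(1, 8), dtype="float32")
y = relay.var("y", shape=(1, 8), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x, y], relay.add(x, y)))

# Offload supported ops to the external C codegen, then build as usual.
mod = transform.AnnotateTarget("ccompiler")(mod)
mod = transform.MergeCompilerRegions()(mod)
mod = transform.PartitionGraph()(mod)
lib = relay.build(mod, target="llvm")
```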
Hello!
I am new to working with compilers. While installing TVM and CMake, I could not
find Cmake/Config.cmake; instead there is CTestConfig.cmake. I would like to
know whether they are the same and, if so, whether I should add this line:
`export TVM_LOG_DEBUG="ir/transform.cc=1;relay/ir/transform.cc=1"`
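For reference, a minimal sketch of how I understand that variable is meant to be used (the assumption being that it just needs to be set before the TVM library is loaded):
```
import os

# Enable per-file debug logging; set before TVM's shared library is imported.
os.environ["TVM_LOG_DEBUG"] = "ir/transform.cc=1;relay/ir/transform.cc=1"

import tvm
print(tvm.__version__)
```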
Can we get an update on how this should be done?
There is an example in the docs for [reading a model from a
file](https://tvm.apache.org/docs/how_to/compile_models/from_tensorflow.html#sphx-glr-how-to-compile-models-from-tensorflow-py),
but I would like to know how to do it with a model object instead.
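To make the question concrete, a minimal sketch of what I am after (assumptions: a tiny TF1-style graph built in memory, with illustrative names and shapes, standing in for the real model object):
```
import tvm
from tvm import relay
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Build a tiny graph in memory instead of reading a frozen .pb from disk.
with tf.Graph().as_default() as graph:
    x = tf.placeholder(tf.float32, shape=(1, 8), name="x")
    w = tf.constant(1.5, shape=(8, 4), dtype=tf.float32)
    tf.matmul(x, w, name="out")
    graph_def = graph.as_graph_def()

# The frontend used in the linked tutorial accepts the in-memory GraphDef.
mod, params = relay.frontend.from_tensorflow(
    graph_def, shape={"x": (1, 8)}, outputs=["out"]
)
lib = relay.build(mod, target="llvm", params=params)
```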
An error occurred when I ran a code snippet that is a small *self-attention*
implementation.
I also disabled the 'AlterOpLayout' pass, but it didn't help. Is my code wrong?
How should I modify it?
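For reference, a minimal sketch of the kind of build I mean when I say the pass is disabled (the tiny softmax function only stands in for the real module below):
```
import tvm
from tvm import relay

# Tiny stand-in module; the real self-attention code follows below.
x = relay.var("x", shape=(1, 16), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.nn.softmax(x)))

# Build with the AlterOpLayout pass disabled.
with tvm.transform.PassContext(opt_level=3, disabled_pass=["AlterOpLayout"]):
    lib = relay.build(mod, target="llvm")
```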
Here is my source code
```
from tvm import relay
from tvm.relay import testing
import tvm
from tvm import te
from t