[TVM Discuss] [Questions] CPU usage is very high on Android pad

2020-08-28 Thread Sining Sun via TVM Discuss
Hi all, Recently I deployed my model to an Android pad. The CPU is a Cortex-A53 (aarch64). My model is CNN-based and only has about 900k parameters. When I run it on the pad, I use `top` to observe the CPU usage and find that it is very high. Does anyone have ideas about this problem?
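A minimal sketch (not from the original post) of one knob that is often relevant when `top` shows every core busy: the size of the TVM runtime thread pool, which can be capped with the `TVM_NUM_THREADS` environment variable. The file names, input name, shape, and thread count below are placeholders, and the sketch assumes the compiled aarch64 artifacts are run from Python on the device.

```python
# A minimal sketch, assuming the artifacts produced by relay.build() are run
# from Python on the device; file names, input name, and shape are placeholders.
import os
os.environ["TVM_NUM_THREADS"] = "2"  # must be set before the runtime thread pool starts

import numpy as np
import tvm
from tvm.contrib import graph_runtime

lib = tvm.runtime.load_module("deploy_lib.so")
graph = open("deploy_graph.json").read()
params = bytearray(open("deploy_param.params", "rb").read())

module = graph_runtime.create(graph, lib, tvm.cpu(0))
module.load_params(params)
module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
```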

[TVM Discuss] [Questions] How to feed more than one input to the network when I run onnx model?

2020-06-01 Thread Sining Sun via TVM Discuss
Hi all, I am trying to run inference for an ONNX model. I have read the tutorial "[Compile ONNX Models](https://tvm.apache.org/docs/tutorials/frontend/from_onnx.html#sphx-glr-tutorials-frontend-from-onnx-py)", but in that tutorial only one input is fed to the model: `tvm_output = intrp.evaluate()(tvm.nd.array(x.astype(dtype)), **params).asnumpy()`. How can I feed more than one input to the network?
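A hedged sketch of one way to handle multiple inputs, following the same `create_executor` flow as the tutorial: give `from_onnx` one shape per input, then pass one array per input, in that order, to the evaluated function. The model path, the input names `input0`/`input1`, and the shapes are placeholders; the real input names can be read from `onnx_model.graph.input`.

```python
# A hedged sketch (not from the tutorial): feeding a two-input ONNX model.
# "two_input_model.onnx", "input0"/"input1" and the shapes are placeholders.
import numpy as np
import onnx
import tvm
from tvm import relay

onnx_model = onnx.load("two_input_model.onnx")

# One entry per graph input.
shape_dict = {"input0": (1, 3, 224, 224), "input1": (1, 10)}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

target = "llvm"
intrp = relay.build_module.create_executor("graph", mod, tvm.cpu(0), target)

x0 = np.random.rand(*shape_dict["input0"]).astype("float32")
x1 = np.random.rand(*shape_dict["input1"]).astype("float32")

# One positional argument per graph input, followed by the weights.
tvm_output = intrp.evaluate()(tvm.nd.array(x0), tvm.nd.array(x1), **params).asnumpy()
```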