Hi all,
Recently, I deployed my model to an Android tablet. The CPU is a Cortex-A53
(aarch64). My model is CNN-based and has only 900k parameters. When I run it on
the tablet and watch `top`, the CPU usage is very high. Does anyone have ideas
about this problem?
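
For context, this is roughly how I load and run the model on the device (a
minimal sketch; the module path `deploy.so`, the input name, and the shape are
placeholders for my actual setup):

```python
import numpy as np
import tvm
from tvm.contrib import graph_runtime

# Sketch of the inference loop on the device; the module path,
# input name, and shape below are placeholders.
ctx = tvm.cpu(0)
lib = tvm.runtime.load_module("deploy.so")
module = graph_runtime.GraphModule(lib["default"](ctx))

x = np.random.uniform(size=(1, 3, 224, 224)).astype("float32")
module.set_input("input", x)
module.run()
out = module.get_output(0).asnumpy()
```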
---
Hi all,
I am trying to run inference on an ONNX model. I have read the tutorial
"[Compile ONNX
Models](https://tvm.apache.org/docs/tutorials/frontend/from_onnx.html#sphx-glr-tutorials-frontend-from-onnx-py)",
but in that tutorial only one input is needed:
`tvm_output = intrp.evaluate()(tvm.nd.array(x.astype(dtype)), **params).asnumpy()`
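
My model takes more than one input. Based on the tutorial's pattern, my guess
is that the evaluated function accepts one positional argument per graph
input, something like the sketch below (the model file `model.onnx`, the input
names, and the shapes are placeholders). Is this the right way to do it?

```python
import numpy as np
import onnx
import tvm
from tvm import relay

# Placeholders: model file, input names, and shapes.
onnx_model = onnx.load("model.onnx")
shape_dict = {"input0": (1, 3, 224, 224), "input1": (1, 10)}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

target = "llvm"
intrp = relay.build_module.create_executor("graph", mod, tvm.cpu(0), target)

x0 = np.random.uniform(size=shape_dict["input0"]).astype("float32")
x1 = np.random.uniform(size=shape_dict["input1"]).astype("float32")

# Guess: one tvm.nd.array per graph input, in declaration order,
# followed by **params as in the single-input tutorial.
tvm_output = intrp.evaluate()(
    tvm.nd.array(x0), tvm.nd.array(x1), **params
).asnumpy()
```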