The target device is a Jetson Nano.
AutoTVM is used to derive the best tuning logs for the conv2d layers on the
CUDA backend, with the sm_52 compute capability option.
---
[Visit Topic](https://discuss.tvm.ai/t/yolov3-tiny-batch-input-test-failed/6796/5) to respond.
You are receiving this because you enabled mailing list mode.
@kitkat
What is the target backend when you get this timing performance?
Thanks
---
[Visit Topic](https://discuss.tvm.ai/t/yolov3-tiny-batch-input-test-failed/6796/4) to respond.
Siju,
The problem is solved!
When I run yolov3-tiny on the Jetson Nano, single-image inference takes about
35 ms.
Now, a batch of four images takes about 120 ms.
I greatly appreciate your response.
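For what it's worth, those timings suggest batching does buy some throughput. A quick back-of-envelope check in plain Python, using only the numbers quoted above:

```python
# Throughput comparison from the timings quoted in this thread:
# ~35 ms per run at batch 1, ~120 ms per run at batch 4.
single_ms = 35.0
batch4_ms = 120.0

per_image_ms = batch4_ms / 4            # 30.0 ms per image when batched
throughput_gain = single_ms / per_image_ms
print(f"{per_image_ms:.1f} ms/image, {throughput_gain:.2f}x throughput")
# -> 30.0 ms/image, 1.17x throughput
```

So batching four images trades a little latency per call for roughly a 17% throughput gain on this model.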
---
diff --git a/python/tvm/relay/frontend/darknet.py b/python/tvm/relay/frontend/darknet.py
index 936d7c0dc..62a320780 100644
--- a/python/tvm/relay/frontend/darknet.py
+++ b/python/tvm/relay/frontend/darknet.py
@@ -637,12 +637,12 @@ class GraphProto(object):
Hi,
I'm trying to run inference on the "yolov3-tiny" model with input batch_size = 4.
The input shape was (4, 3, 416, 416).
However, the output shapes are as follows:
module.get_output(0) --> (1, 255, 26, 26)
module.get_output(1) --> (1, 255, 13, 13)
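For reference, the expected output shapes can be worked out from the yolov3-tiny head layout: each of the two detection heads emits num_anchors * (5 + num_classes) channels (3 * (5 + 80) = 255 for the standard 80-class COCO model) on grids at strides 16 and 32 of the input. A minimal sketch in plain Python (independent of TVM; the helper name is my own):

```python
def yolov3_tiny_output_shapes(batch, in_size=416, num_classes=80, num_anchors=3):
    """Expected NCHW shapes of the two yolov3-tiny detection heads.

    Illustrative helper (not part of TVM): each grid cell predicts
    num_anchors boxes, each with 5 box terms plus the class scores.
    """
    channels = num_anchors * (5 + num_classes)   # 3 * (5 + 80) = 255
    return [
        (batch, channels, in_size // 16, in_size // 16),  # 26x26 head
        (batch, channels, in_size // 32, in_size // 32),  # 13x13 head
    ]

print(yolov3_tiny_output_shapes(4))
# -> [(4, 255, 26, 26), (4, 255, 13, 13)]
```

With batch_size = 4 the heads should come out as (4, 255, 26, 26) and (4, 255, 13, 13), so the (1, ...) shapes reported above suggest the batch dimension is being dropped somewhere.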
IMHO, the problem has occurred when the foll