This problem was solved by linking the cuda and cuda_runtime libraries when
building gotvm.
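For anyone hitting the same link error, here is a minimal sketch of the change, assuming a default CUDA install under /usr/local/cuda (both the path and the use of CGO environment variables are assumptions; gotvm's own Makefile may expose equivalent knobs):

```shell
# Hypothetical linker setup for building gotvm with the CUDA parts of
# tvm_runtime_pack.cc enabled; /usr/local/cuda is an assumed install path.
export CGO_CFLAGS="-I/usr/local/cuda/include"
export CGO_LDFLAGS="-L/usr/local/cuda/lib64 -lcuda -lcudart"
make
```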
---
[Visit Topic](https://discuss.tvm.ai/t/gotvm-make-error-with-cuda/7765/2) to
respond.
Hi,
I am trying to deploy gotvm as a GPU-accelerated deep learning runtime on my
edge computing architecture.
When I make gotvm without CUDA, there is no error.
However, when I uncomment the CUDA-related lines in tvm_runtime_pack.cc, the
following errors occur:
$ make
The target device is a Jetson Nano.
AutoTVM is used to derive the best tuning logs for the conv2d layers on the
CUDA backend with the sm_52 compute capability option.
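As a point of reference, here is a hedged sketch of the target strings one might pass when tuning for the Nano. The exact option strings are assumptions, and note that the Nano's Maxwell GPU reports compute capability 5.3, so sm_53 may also be worth trying:

```python
# Hypothetical TVM target strings for a Jetson Nano (assumptions, not
# taken from this thread).
target = "cuda -arch=sm_53"                      # Nano's GPU is Maxwell, CC 5.3
target_host = "llvm -mtriple=aarch64-linux-gnu"  # 64-bit ARM host CPU

print(target, target_host)
```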
---
[Visit
Topic](https://discuss.tvm.ai/t/yolov3-tiny-batch-input-test-failed/6796/5) to
respond.
Siju,
The problem is solved!!
When I run Yolov3-tiny on the Jetson Nano, single-image inference takes about
35 ms.
Now, four-image inference takes about 120 ms.
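The numbers above can be turned into throughput for comparison (the 35 ms and 120 ms latencies are taken from this thread; everything else is arithmetic):

```python
# Latency figures reported above for Yolov3-tiny on Jetson Nano.
single_ms = 35.0   # one image per run
batch_ms = 120.0   # four images per run

single_throughput = 1000.0 / single_ms    # images/sec at batch size 1
batch_throughput = 4 * 1000.0 / batch_ms  # images/sec at batch size 4

print(round(single_throughput, 1))  # ~28.6 images/sec
print(round(batch_throughput, 1))   # ~33.3 images/sec
```

So batching four images raises throughput even though per-run latency grows.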
I greatly appreciate your response.
---
Hi,
I'm trying to run inference with the "yolov3-tiny" model with input batch_size = 4.
The input shape was (4, 3, 416, 416).
However, the shape of the output is as follows:
module.get_output(0) --> (1, 255, 26, 26)
module.get_output(1) --> (1, 255, 13, 13)
IMHO, the problem has occurred when the foll
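For context on the shapes, assuming the standard COCO configuration of Yolov3-tiny (80 classes, 3 anchors per scale, output strides 16 and 32; these are assumptions, since the thread does not state the config), the batch-4 outputs should look like this:

```python
# Sketch of where the Yolov3-tiny output shapes come from (COCO assumed).
batch, in_size = 4, 416
num_classes, anchors_per_scale = 80, 3
channels = anchors_per_scale * (5 + num_classes)  # 3 * 85 = 255

expected = [(batch, channels, in_size // s, in_size // s) for s in (16, 32)]
print(expected)  # [(4, 255, 26, 26), (4, 255, 13, 13)]
```

The `(1, ...)` shapes returned by `module.get_output` suggest the module may have been compiled with a batch size of 1, so the input shape passed at compile time is worth double-checking.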