Thanks for the tip!
After adding the line you mentioned, the result now gives me "CLEAR" even when
running with opt_level=0.
It was very helpful :)
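In case it helps someone else landing here: below is a minimal sketch of where such a line typically goes, assuming (as the topic title suggests) that the line in question was a device synchronization call; everything around it is illustrative only.

```python
# Illustrative sketch only; assumes the added line was a device sync call.
# Without it, CUDA/TensorRT kernels can still be running asynchronously when
# outputs are read or a timer is stopped, which skews both results and timings.
import tvm

ctx = tvm.gpu(0)
# ... set inputs and call the compiled module's run() here ...
ctx.sync()  # block until all queued GPU work on this context has finished
# ... only now read outputs or stop the timer ...
```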
---
Thanks for your replies!
I checked the result with code like the snippet below, and it seems the results
are the same:
# tvm_trt_compare.py
import tvm
from tvm import relay

...
# Import the TorchScript model into Relay
mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
tgt = tvm.target.cuda()
ctx = tvm.gpu(0)

### Same input for both runs
data
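For context, here is a minimal sketch of how such a comparison could continue; this is an assumption, not the exact script from this thread. `input_name`, `data`, and `torch_out` (the PyTorch baseline as a numpy array) are placeholders, and the TensorRT-partitioned module would be built and run the same way before comparing its output.

```python
# Sketch only (not the exact script from this thread): build the plain CUDA
# module, run it on the same input, and compare against a reference output.
# Assumes `mod`, `params`, `tgt`, and `ctx` from the snippet above, plus
# hypothetical `input_name` (model input name), `data` (numpy input), and
# `torch_out` (PyTorch baseline as a numpy array).
import numpy as np
from tvm.contrib import graph_executor  # `graph_runtime` on older TVM releases

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=tgt, params=params)

m = graph_executor.GraphModule(lib["default"](ctx))
m.set_input(input_name, tvm.nd.array(data))
m.run()
ctx.sync()  # wait for the GPU to finish before reading the output
tvm_out = m.get_output(0).asnumpy()

# Element-wise comparison with a small tolerance
if np.allclose(torch_out, tvm_out, rtol=1e-3, atol=1e-3):
    print("CLEAR")
else:
    print("MISMATCH")
```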
---
I'm trying to build android_rpc from the Docker image given in
https://tvm.apache.org/docs/tutorials/frontend/deploy_model_on_android.html#sphx-glr-tutorials-frontend-deploy-model-on-android-py
I pulled libOpenCL.so from the Android board and tried to run
/apps/android_rpc/app/src/main/jni/build.