> However, when setting `OMP_NUM_THREADS=1` the model inference time is the same;
> it seems to be a problem with multiple threads.
Could there be any thread-related limitation in your PyTorch script?
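One way to rule this in or out is to pin the thread-count environment variables before any numerical library is imported. This is a sketch, assuming the slowdown comes from OpenMP-style thread oversubscription; `MKL_NUM_THREADS` is an extra variable I'm adding for completeness, not one mentioned in the thread:

```python
import os

# Thread-count environment variables must be set BEFORE importing
# numerical libraries (PyTorch, NumPy/MKL); set afterwards they are
# silently ignored by already-initialized thread pools.
# The value "1" is just for the single-thread comparison.
for var in ("OMP_NUM_THREADS", "MKL_NUM_THREADS"):
    os.environ[var] = "1"

print(os.environ["OMP_NUM_THREADS"])  # → 1
```

If inference time changes only when the variables are set this early, the issue is likely thread-pool configuration rather than TVM itself.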
---
[Visit
Topic](https://discuss.tvm.ai/t/performance-of-same-op-and-work
By design, TVM requires users to manually choose their target device. The
`TVMContext` object has two structure members: `device_type` and
`device_id`.
If `device_id` is not specified, it is natural to choose the first device by default.
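A minimal Python sketch of that structure (an illustrative stand-in, not TVM's actual C++ class; the device-type codes mirror the DLPack convention TVM uses, where CPU is 1 and OpenCL is 4):

```python
from dataclasses import dataclass

# Illustrative stand-in for TVM's TVMContext: a device is identified
# by a device_type code and a device_id.  device_id defaults to 0,
# i.e. the first device of that type.
@dataclass
class Context:
    device_type: int
    device_id: int = 0

cpu = Context(device_type=1)                  # kDLCPU == 1
opencl = Context(device_type=4, device_id=1)  # second OpenCL device

print(cpu.device_id)  # → 0
```

The default of `device_id=0` is what "choose the first one by default" means in practice.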
---
In `src/runtime/opencl/opencl_device_api.cc`, we can find some runtime
functions that check the device information.
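The kind of check those runtime functions perform can be sketched in Python (a hypothetical device list; TVM's real implementation is the C++ in that file):

```python
# Hypothetical sketch of a device-API validity check, loosely modeled
# on what an OpenCL runtime does: verify that the requested device_id
# refers to an existing device before using it.
devices = ["gpu-0", "gpu-1"]  # placeholder device names

def check_device(device_id: int) -> str:
    if not 0 <= device_id < len(devices):
        raise IndexError(f"device_id {device_id} is out of range "
                         f"(only {len(devices)} devices found)")
    return devices[device_id]

print(check_device(0))  # → gpu-0
```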
---