I vote for A0 after careful consideration. DLDevice requires extra knowledge of DLPack.
---
cc @mbaret, it would be great if you could help review this PR.
---
Vote for A0, +1 to make it more modular.
---
I vote for A0 for consistency with DLDataType / tvm::DataType.
---
I am wondering if there is any chance to introduce a quick way to be compatible with dynamic shapes.
As @cloudhan mentioned, TensorRT lets the user set only the necessary input dimensions at
runtime and automatically computes the shapes of the other tensors:
[Working With Dynamic
Shapes](https://docs.nvidia.com/deeplearning/te
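
For reference, a minimal sketch of the TensorRT dynamic-shape flow being described here, assuming the TensorRT 7/8-era Python API (`build_engine` / `set_binding_shape`); the input name `"input"`, the shapes, and the ONNX model path are placeholders, not taken from this thread:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

builder = trt.Builder(TRT_LOGGER)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, TRT_LOGGER)
with open("model.onnx", "rb") as f:          # placeholder model path
    parser.parse(f.read())

# An optimization profile declares the min/opt/max range a dynamic input
# dimension (here, the batch size) may take at runtime.
config = builder.create_builder_config()
profile = builder.create_optimization_profile()
profile.set_shape("input",                   # placeholder input tensor name
                  (1, 3, 224, 224),          # min
                  (8, 3, 224, 224),          # opt
                  (32, 3, 224, 224))         # max
config.add_optimization_profile(profile)

engine = builder.build_engine(network, config)
context = engine.create_execution_context()

# At runtime only the concrete input shape is supplied; TensorRT infers the
# shapes of all downstream tensors from it.
context.set_binding_shape(0, (4, 3, 224, 224))
assert context.all_binding_shapes_specified
```

Something along these lines is what "set necessary input dimensions at runtime" refers to: the engine is built once over a shape range, and only the actual input shape has to be provided per inference call.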