I think that would be an interesting project, and something that is entirely
feasible.
But personally, I'd rather improve our support for PyTorch: developing a new
frontend requires significant effort, we already have good support for PyTorch,
and PyTorch is increasingly adding JAX-inspired features.
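For example, a minimal sketch of those JAX-inspired features (assuming PyTorch >= 2.0, where the `torch.func` namespace provides JAX-style function transforms):

```python
import torch
from torch.func import grad, vmap  # JAX-inspired function transforms

def f(x):
    # scalar-valued function, as torch.func.grad requires
    return (x ** 2).sum()

# grad(f) returns a new function computing df/dx, much like jax.grad
g = grad(f)
print(g(torch.tensor(3.0)))  # tensor(6.)

# vmap vectorizes a per-example function over a batch dimension, like jax.vmap
batched_g = vmap(grad(lambda x: x ** 2))
print(batched_g(torch.tensor([1.0, 2.0, 3.0])))  # tensor([2., 4., 6.])
```

With transforms like these available natively in PyTorch, a dedicated JAX frontend buys less than it once would have.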
Thanks Cody. It looks like the API will be:
```
# assuming the usual imports for this API:
#   from tvm.relay.op.contrib.tensorrt import partition_for_tensorrt
#   from tvm.relay import vm
trt_target = tvm.target.Target("tensorrt -use_fp16=True -implicit_batch_mode=False")
mod = partition_for_tensorrt(mod, params=params, target=trt_target)
exe = vm.compile(mod, target=["cuda", trt_target], params=params)
```
(and similarly for
Will TVM support building Google JAX models through Relay in the future?
---
[Visit Topic](https://discuss.tvm.apache.org/t/will-tvm-support-jax/12962/1) to
respond.
You are receiving this because you enabled mailing list mode.
Hello, I encountered the same problem (`cv::dnn` has not been declared). Have
you found a solution?
---
[Visit
Topic](https://discuss.tvm.apache.org/t/solved-c-inference-test-app-doesnt-work-correctly/984/19)
to respond.