Hey, sorry for the delayed reply! In terms of the timeline, we are trying to
upstream and open-source the work around Q3 this year, and in the meantime I'm
going to start upstreaming TVM improvements. Generally we could still use a
good amount of training operator coverage (gradients, loss functions, and so on).
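To make "gradient coverage" concrete, here is a minimal sketch of what adding one more operator gradient looks like through Relay's existing Python hook, `relay.op.register_gradient`. The `level=11` is only there so the snippet runs standalone (`log` already ships with a gradient at the default level 10, and re-registering at the same level is an error); it's an illustration of the hook, not necessarily the form the upstreamed work will take:

```python
from tvm import relay
from tvm.relay.op import register_gradient


# d/dx log(x) = 1/x, so the adjoint contribution is grad * (1/x).
# level=11 overrides the stock registration at the default level 10.
@register_gradient("log", level=11)
def log_grad(orig, grad):
    x = orig.args[0]  # the original call's input
    return [grad * relay.ones_like(x) / x]
```

This mirrors how the stock gradients in `python/tvm/relay/op/_tensor_grad.py` are written: one function per op, returning a list with one adjoint per input.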
---

I also wish we could easily add hot-pluggable **Relay** operators (whether for
testing, easily supporting additional ops, etc.). Unfortunately, I believe the
main reason (or at least one major reason) this is currently not available is
that type relations (and basically all of the type inference machinery) are
implemented in C++, so they can't be registered on the fly from Python.
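For readers unfamiliar with the term: a type relation is essentially a function from input types to output types that the type inferencer solves against. Below is a deliberately toy, self-contained sketch of the concept in Python. This registry is *not* a TVM API; in Relay proper the equivalent is declared in C++ via `RELAY_REGISTER_OP(...).add_type_rel(...)`, which is exactly the coupling that makes hot-plugging hard today:

```python
from typing import Callable, Dict, List, Tuple

Shape = Tuple[int, ...]

# Hypothetical stand-in registry, NOT a TVM API: maps an op name to a
# function computing the output shape from the input shapes.
TYPE_RELATIONS: Dict[str, Callable[[List[Shape]], Shape]] = {}


def register_type_rel(op_name: str, rel: Callable[[List[Shape]], Shape]) -> None:
    TYPE_RELATIONS[op_name] = rel


def broadcast_rel(in_shapes: List[Shape]) -> Shape:
    """Numpy-style broadcasting: the output shape of e.g. add(a, b)."""
    a, b = in_shapes
    rank = max(len(a), len(b))
    a = (1,) * (rank - len(a)) + a
    b = (1,) * (rank - len(b)) + b
    out = []
    for x, y in zip(a, b):
        if x != y and 1 not in (x, y):
            raise TypeError(f"shapes {in_shapes} do not broadcast")
        out.append(max(x, y))
    return tuple(out)


register_type_rel("add", broadcast_rel)
assert TYPE_RELATIONS["add"]([(4, 1), (3,)]) == (4, 3)
```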
---

Thanks, the batched-inputs point makes sense; I misunderstood! I also didn't
mean to imply that batch size itself is unnecessary: I do think it's a fairly
universal concept for data loading (except perhaps for dynamic models where the
input shape changes for each instance, but in that case you could arguably
treat each instance as a batch of one).
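As an aside, Relay on main can already express that per-instance dynamism at the type level; here's a minimal sketch using `relay.Any()` for the batch dimension, so one function accepts a different leading dimension on every call:

```python
import tvm
from tvm import relay

# A Relay function whose batch dimension is symbolic (relay.Any()),
# so the same module can accept per-instance input shapes.
x = relay.var("x", shape=(relay.Any(), 3, 32, 32), dtype="float32")
func = relay.Function([x], relay.nn.relu(x))
mod = tvm.IRModule.from_expr(func)
print(mod)  # the var is typed Tensor[(?, 3, 32, 32), float32]
```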
---

Perhaps we should rename the existing GraphExecutor?
https://github.com/apache/tvm/blob/main/python/tvm/relay/build_module.py#L366
---
Commenting to agree that I like the approach, and I strongly believe this will
be useful (e.g. for reducing the boilerplate involved in setting up datasets
for TVM training, since common datasets already exist in PyTorch or TF). Also
agree with Tianqi about the NDArray/DLPack interfacing as well.
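For concreteness, here's a minimal sketch of that interfacing, assuming `tvm.nd.from_dlpack` and PyTorch's `torch.utils.dlpack` (the `DataLoader` below is purely illustrative). The nice property is that the handoff is zero-copy for contiguous tensors, so wrapping an existing PyTorch dataset shouldn't add a per-batch copy:

```python
import torch
from torch.utils import dlpack as torch_dlpack
import tvm


def torch_batch_to_tvm(batch: torch.Tensor) -> tvm.nd.NDArray:
    # Zero-copy handoff: torch tensor -> DLPack capsule -> tvm.nd.NDArray.
    return tvm.nd.from_dlpack(torch_dlpack.to_dlpack(batch.contiguous()))


loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(16, 3, 32, 32)),
    batch_size=4,
)
for (xb,) in loader:
    print(torch_batch_to_tvm(xb).shape)  # (4, 3, 32, 32)
    break
```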