@hogepodge @tqchen thanks for your advice. I have updated the PR.
> I wonder whether this would make the torch fallback op
> ([apache/tvm#7401](https://github.com/apache/tvm/pull/7401)) more or less
> useful (it would depend on what you (plan to) do with unsupported ops). I am
> still pondering whether to close it or dust it off.
@t-vi It will help a lot.
> Is this supposed to be useful for accelerating PyTorch training?
From my perspective, we are still quite far from supporting PyTorch training. I can think of two possible ways:
1. use TorchScript to train
2. use Relay to train

It seems both ways are at an early stage. Maybe we can discuss this further.
This RFC adds a `PyTorchTVM` module that compiles TorchScript to TVM and lets the accelerated module be used inside PyTorch (a sketch of the underlying flow is included below).
Initial PR: https://github.com/apache/tvm/pull/8777
Discuss:
https://discuss.tvm.apache.org/t/rfc-pytorchtvm-compile-torchscript-to-tvm-and-use-accelerated-module-in-pytorch/10873
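For reference, here is a minimal sketch of the underlying TorchScript-to-TVM flow that such a module would wrap, using the existing `torch.jit.trace`, `relay.frontend.from_pytorch`, and `graph_executor` APIs. The model, input name, and shapes are placeholders, and the actual `PyTorchTVM` interface proposed in the PR may differ:

```python
import torch
import torchvision
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Placeholder model and input; any traceable PyTorch module would do.
model = torchvision.models.resnet18().eval()
example = torch.randn(1, 3, 224, 224)
scripted = torch.jit.trace(model, example)

# Import the TorchScript graph into Relay.
input_name = "input0"
mod, params = relay.frontend.from_pytorch(scripted, [(input_name, (1, 3, 224, 224))])

# Build with TVM and run through the graph executor.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
dev = tvm.cpu()
rt = graph_executor.GraphModule(lib["default"](dev))
rt.set_input(input_name, example.numpy())
rt.run()
out = rt.get_output(0)
```

The point of `PyTorchTVM` is to hide these steps behind a PyTorch-friendly wrapper so that the compiled result can be called directly from PyTorch code.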
# Background
The PyTorch framework is increasingly being adopted for both research and production. At the same time, PyTorch lacks an effective inference acceleration toolchain, which is a major concern in industry. Existing acceleration paths include:
1. PyTorch -> ONNX -> TensorRT/TVM (sketched after this list)
2. PyTorch -> TorchScript -> TVM (this RFC)
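As a concrete illustration of path 1, here is a minimal sketch of the ONNX route into TVM; the model, file name, and input shape below are placeholders:

```python
import onnx
import torch
import torchvision
import tvm
from tvm import relay

# Placeholder model and input shape.
model = torchvision.models.resnet18().eval()
example = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, example, "resnet18.onnx", input_names=["input0"])

# Import the exported ONNX graph into Relay and build it.
onnx_model = onnx.load("resnet18.onnx")
mod, params = relay.frontend.from_onnx(onnx_model, shape={"input0": (1, 3, 224, 224)})
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
```

Path 2 skips the ONNX hop and goes through TorchScript directly, which is what this RFC targets.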
@tobegit3hub thanks for this great work. I am trying to export an autotuned model with TVMDSOOp, but I am stuck on how to register the func_name with the tf_op.
```python
import tvm
from tvm import relay

# mod, target and params come from the autotuned model (loading not shown here).
with tvm.transform.PassContext(opt_level=3):
    graph, lib, params = relay.build_module.build(
        mod, target=target, params=params)
```
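For context, this is the flow I am comparing against, based on my reading of the `tf_op` tests in the TVM repo; the operator, its name ("addone"), the output shape/dtype arguments, and the library path are placeholders, and I am not sure whether the same pattern applies to a whole `relay.build` graph:

```python
import tensorflow as tf
import tvm
from tvm import te
from tvm.contrib import tf_op

# Build a single TVM operator; the `name` passed to tvm.build is the func_name
# that the TensorFlow side later refers to.
n = te.var("n")
A = te.placeholder((n,), name="A", dtype="float32")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)
lib = tvm.build(s, [A, B], target="llvm", name="addone")
lib.export_library("tvm_addone_dll.so")

# Wrap the exported library as a TensorFlow op via TVMDSOOp; the output
# shape/dtype values follow the upstream examples as I understand them.
module = tf_op.OpModule("tvm_addone_dll.so")
addone = module.func("addone", output_shape=[2], output_dtype="float")
print(addone(tf.constant([1.0, 2.0])))
```

What I cannot figure out is which func_name to pass for the module built from `relay.build_module.build` above.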