Re: [apache/tvm-rfcs] [RFC]PyTorchTVM (#25)

2021-09-13 Thread Meteorix
@hogepodge @tqchen thanks for your advice. I have updated the PR. (https://github.com/apache/tvm-rfcs/pull/25#issuecomment-917983427)

Re: [apache/tvm-rfcs] [RFC]PyTorchTVM (#25)

2021-08-29 Thread Meteorix
> I wonder whether this would make the torch fallback op ([apache/tvm#7401](https://github.com/apache/tvm/pull/7401)) more or less useful (it would depend on what you (plan to) do with unsupported ops). I am still pondering whether to close it or dust it off.

@t-vi It will help a lot. If

Re: [apache/tvm-rfcs] [RFC]PyTorchTVM (#25)

2021-08-25 Thread Meteorix
> Is this supposed to be useful for accelerating PyTorch training?

From my perspective, we are a little far from supporting PyTorch training. I can think of 2 possible ways: 1. use TorchScript to train 2. use Relay to train. It seems both ways are at an early stage. Maybe we can further discuss th

[apache/tvm-rfcs] [RFC]PyTorchTVM (#25)

2021-08-24 Thread Meteorix
This RFC adds a `PyTorchTVM` module to support compiling TorchScript to TVM and using the accelerated module in PyTorch. Initial PR: https://github.com/apache/tvm/pull/8777 Discuss: https://discuss.tvm.apache.org/t/rfc-pytorchtvm-compile-torchscript-to-tvm-and-use-accelerated-module-in-pytorch/10873 Y
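For readers skimming the digest, here is a minimal sketch of the compile path such a `PyTorchTVM` module would wrap, using TVM's existing TorchScript frontend rather than the RFC's own API; the model choice, input name, shapes, and target are assumptions picked for illustration:

```
import torch
import torchvision
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Trace a PyTorch model to TorchScript (illustrative model choice).
model = torchvision.models.resnet18().eval()
example = torch.rand(1, 3, 224, 224)
scripted = torch.jit.trace(model, example)

# Convert TorchScript to Relay and compile with TVM.
shape_list = [("input0", example.shape)]
mod, params = relay.frontend.from_pytorch(scripted, shape_list)
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# Run the compiled module through the graph executor.
dev = tvm.cpu()
rt = graph_executor.GraphModule(lib["default"](dev))
rt.set_input("input0", example.numpy())
rt.run()
out = rt.get_output(0).numpy()
```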

[Apache TVM Discuss] [Development/pre-RFC] [RFC]PyTorchTVM: compile TorchScript to TVM and use accelerated module in PyTorch

2021-08-24 Thread Meteorix via Apache TVM Discuss
# Background The PyTorch framework is increasingly being adopted for research and production. At the same time, PyTorch lacks an effective inference acceleration toolchain, which is a major concern in industry. Existing acceleration paths include: 1. PyTorch -> ONNX -> TensorRT/TVM 2. PyTorch ->
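As a concrete illustration of existing path 1, a hedged sketch of its first hop (PyTorch -> ONNX) using `torch.onnx.export`; the model, file name, opset, and tensor names are assumptions for the example:

```
import torch
import torchvision

# Export a PyTorch model to ONNX (first hop of path 1).
model = torchvision.models.resnet18().eval()
dummy = torch.rand(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy,
    "resnet18.onnx",          # illustrative file name
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)

# The resulting resnet18.onnx can then be handed to TensorRT,
# or to TVM via relay.frontend.from_onnx.
```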

[TVM Discuss] [Development] Add the document for TVMDSOOp

2020-06-23 Thread Meteorix via TVM Discuss
@tobegit3hub thanks for this great work. I am trying to export an autotuned model with TVMDSOOp. Now I am stuck at how to register the func_name with the tf_op.
```
with tvm.transform.PassContext(opt_level=3):
    graph, lib, params = relay.build_module.build(
        mod, target=target, params=params)
```
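Not the thread's answer, just a sketch of how the TVMDSOOp examples pair a named function with `tf_op` on the TensorFlow side, assuming the layout in apps/tf_tvmdsoop; note that `relay.build_module.build` yields a graph runtime rather than a single named PackedFunc, and all names here (the `addone` kernel, the `.so` path, shapes) are illustrative:

```
import tensorflow as tf
import tvm
from tvm import te
from tvm.contrib import tf_op

# Build a standalone TVM function with an explicit name ("addone").
n = te.var("n")
A = te.placeholder((n,), name="A", dtype="float32")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)
fadd = tvm.build(s, [A, B], "llvm", name="addone")
fadd.export_library("tvm_addone_dll.so")

# Load the exported library on the TensorFlow side and bind the
# function by the same name it was built with.
module = tf_op.OpModule("tvm_addone_dll.so")
addone = module.func("addone", output_shape=[4], output_dtype="float")

x = tf.constant([1.0, 2.0, 3.0, 4.0], dtype=tf.float32)
y = addone(x)
```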