Thanks for the nice RFC @Meteorix! Embedding TVM into PyTorch is definitely a super important direction. I will read it carefully later this week.
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/tvm
cc @t-vi
--
https://github.com/apache/tvm-rfcs/pull/25#issuecomment-904951973
Thanks @AndrewZhaoLuo
--
https://github.com/apache/tvm-rfcs/pull/6#issuecomment-904813648
Merged #6 into main.
--
https://github.com/apache/tvm-rfcs/pull/6#event-5201897255
This RFC adds a `PyTorchTVM` module that supports compiling TorchScript to TVM and using the accelerated module in PyTorch.
Initial PR: https://github.com/apache/tvm/pull/8777
Discuss:
https://discuss.tvm.apache.org/t/rfc-pytorchtvm-compile-torchscript-to-tvm-and-use-accelerated-module-in-pytorch/10873
This is a great discussion. Actually, we are supporting a DSA with TVM; let me share our practice.
1. We only re-use some of the TVM Relay and TIR passes (fewer than 10, such as storage flatten). We don't need most of TVM's passes, and keeping them in our flow wastes compilation time.
2. We dev
It looks like transformer-like models have many `softmax` ops that introduce a
lot of casting before / after them, like
https://gist.github.com/masahi/0d7d96ae88722b616a906cec2054559e#file-transformer-txt-L137-L143
The fact that softmax and the following cast to fp16 are not fused surprised
me.
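To make the pattern concrete, here is a minimal pure-Python sketch of the unfused cast / softmax / cast sequence the gist shows. The function names are mine, not TVM's, and `struct`'s `"e"` format is used only to emulate fp16 rounding; in the real graph these are separate `cast` and `softmax` ops that a fusion pass could in principle merge so the fp32 result never needs to be materialized.

```python
import math
import struct

def to_fp16(x):
    # Round a Python float to IEEE 754 half precision: this models the
    # standalone "cast to fp16" op that follows softmax in the traced graph.
    return struct.unpack("e", struct.pack("e", x))[0]

def softmax_fp32(xs):
    # Numerically stable softmax, computed in full precision.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def softmax_mixed_precision(xs_fp16):
    # The unfused pattern: upcast fp16 -> fp32, softmax in fp32,
    # then cast the result back to fp16 as a separate op.
    xs_fp32 = [float(x) for x in xs_fp16]
    return [to_fp16(y) for y in softmax_fp32(xs_fp32)]

probs = softmax_mixed_precision([to_fp16(v) for v in (0.5, 1.5, -2.0)])
```

The outputs still sum to roughly 1 after the trailing fp16 rounding, which is why folding that cast into the softmax kernel is attractive: it changes the result only by half-precision rounding error.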
Just coming back to this thread: I believe there's a way to introduce the hooks
into the `LowerTEPass` in a less intrusive way, by placing it just above it in
`build_module.cc`. This should mean each Target can register a `relay_to_tir`
`Pass` which is run there rather than having to wire it via
CC: @manupa-arm @grant-arm @areusch @stoa @MJKlaiber
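The registration mechanism being proposed can be sketched, in spirit, as a registry keyed by target kind that the build pipeline consults just before generic lowering. This is a toy pure-Python model with hypothetical names (`register_relay_to_tir`, `lower`), not TVM's actual C++ API:

```python
# Toy model of the hook: each target kind may register a "relay_to_tir"
# pass, and the build pipeline runs it immediately before the generic
# TE lowering step. All names here are illustrative.
RELAY_TO_TIR_HOOKS = {}

def register_relay_to_tir(target_kind):
    def wrap(fn):
        RELAY_TO_TIR_HOOKS[target_kind] = fn
        return fn
    return wrap

def lower(module, target_kind):
    # Run the target's custom pass first, if one is registered...
    hook = RELAY_TO_TIR_HOOKS.get(target_kind)
    if hook is not None:
        module = hook(module)
    # ...then the generic TE lowering (stubbed out as a string here).
    return f"lowered({module})"

@register_relay_to_tir("my_accelerator")
def offload_matmuls(module):
    # A target-specific relay_to_tir pass for a hypothetical accelerator.
    return f"offloaded({module})"

print(lower("mod", "my_accelerator"))  # lowered(offloaded(mod))
print(lower("mod", "llvm"))            # lowered(mod)
```

The appeal of this shape is that targets without a hook go through the default path untouched, so the intrusion into `build_module.cc` is a single lookup.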
---
[Visit Topic](https://discuss.tvm.apache.org/t/pre-rfc-c-device-api/10874/2) to
respond.
# Summary
[summary]: #summary
I want to write an RFC to provide an API which can be used by the C runtime to
abstract the variety of driver APIs for different platforms. This is
specifically catering towards RTOS abstractions for embedded device drivers.
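The shape of such an abstraction, a common interface that the runtime calls through, with one implementation per RTOS or platform driver, can be sketched as follows. This is a conceptual Python sketch with hypothetical method and class names; the RFC itself targets a C-level interface for the C runtime, not this API:

```python
from abc import ABC, abstractmethod

class DeviceAPI(ABC):
    """Common driver interface the runtime would call through.

    Method names are illustrative only; the actual RFC defines the
    C-level interface.
    """

    @abstractmethod
    def activate(self):
        ...

    @abstractmethod
    def deactivate(self):
        ...

class ZephyrUARTDevice(DeviceAPI):
    # One hypothetical RTOS-specific implementation hidden behind the
    # common interface; a bare-metal or FreeRTOS driver would provide
    # its own implementation of the same two methods.
    def __init__(self):
        self.active = False

    def activate(self):
        self.active = True   # would invoke the platform driver here

    def deactivate(self):
        self.active = False  # would release the platform driver here

dev = ZephyrUARTDevice()
dev.activate()
```

The point of the indirection is that generated code depends only on `DeviceAPI`, so retargeting to a different RTOS means swapping the implementation, not the runtime.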
# Motivation
[motivation]: #motivation
Just read through this and am providing my own opinions. I'm a huge fan of L2 -
Tour Style here, and I appreciate that it blends topics such as TVM and
microTVM in the beginning rather than treating them as separate; it makes a lot
of sense to me to use this to ensure we make all of TVM approacha
# Background
The PyTorch framework is increasingly being adopted for research and production. At
the same time, PyTorch lacks an effective inference acceleration toolchain,
which is a major concern in the industry. Existing acceleration paths include:
1. PyTorch -> ONNX -> TensorRT/TVM
2. PyTorch ->