Dear community:
As we continue to grow the codebase and community, it would help us to update the code review guidelines so that we share common ground for healthy collaboration.
This thread proposes to update the code review guidelines to a version that @jroesch, myself, and many others drafted.
Hey @manupa-arm, to clarify: Relay integration is probably not happening prior to meta schedule, but along with it (see M4a of the meta schedule timeline). Right now we can use either #7987 or hand-written TensorIR to play with the scheduling/codegen code path.
Closed #5519.
--
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/apache/tvm/pull/5519#event-5220060096
@u99127 I am doing triage on old PRs and am going to close this one; please feel free to follow up if you would still like to merge these changes. Thanks for your contributions!
It could easily fit into the Getting Started section: a “Documentation Guide” that lays out the organization and motivation, as well as how to contribute.
---
[Visit Topic](https://discuss.tvm.apache.org/t/updated-docs-pre-rfc/10833/23)
to respond.
Thanks @hogepodge.
Placing a higher-level tour into Getting Started would indeed help us incorporate elements from L2 into this framework and combine the concerns of M0 and M1. As a result, we could start with L3 and continue to improve our docs.
As part of the docs, it would be great to capt
Ack, many thanks for the info 🙂!
I wonder whether this would make the torch fallback op (https://github.com/apache/tvm/pull/7401) more or less useful (it would depend on what you plan to do with unsupported ops). I am still pondering whether to close it or dust it off.
I should note that, as far as I know, NVidia has a TensorR
Hey @manupa-arm,
Don't worry. As our first step, we will make TensorIR an optional, but not the default, backend of Relay. There is much work to do (including meta schedule, and some corner cases that meta schedule cannot generate automatically) before totally switching from TE to TensorIR.
Hey @junrushao1994,
Thanks for the clarifications.
Since the Relay integration is supposed to happen before meta schedule is concluded, what would be the default 'schedule' (or, in the context of TensorIR, the default set of scheduling passes) used in a relay.build flow?
--
Hey @manupa-arm thanks for your interest!
> Will the integration be using #7987 ?
Most of the operators are defined with the TE DSL in TOPI, so in these cases we will definitely use #7987, which converts a TE compute DAG to TensorIR.
> If you guys have decided, please let us know what other the A
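[Editor's note] For intuition about what a TE-to-TensorIR conversion like #7987 does, here is a toy Python analogy. This is not TVM's actual API (the names `compute` and `lower_to_loops` are invented for illustration): it only sketches the idea of turning a declarative compute rule into an explicit loop nest that scheduling passes could then transform.

```python
# Toy analogy of lowering a declarative compute description into explicit
# loops. NOT TVM's API; names here are invented purely for illustration.

def compute(shape, fcompute):
    """Declarative description: an output shape plus an index expression."""
    return {"shape": shape, "fcompute": fcompute}

def lower_to_loops(dag, inputs):
    """'Lower' the description into an explicit loop nest filling a buffer."""
    n, = dag["shape"]
    out = [0] * n
    for i in range(n):                        # the loop is now explicit, so it
        out[i] = dag["fcompute"](i, inputs)   # could be split/reordered/vectorized
    return out

A = [1, 2, 3, 4]
B = [10, 20, 30, 40]
add = compute((4,), lambda i, ins: ins[0][i] + ins[1][i])
print(lower_to_loops(add, (A, B)))  # [11, 22, 33, 44]
```

In the real flow, the explicit loop nest is a TensorIR PrimFunc rather than Python loops, and the scheduling passes operate on that IR.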
Ack. Thanks.
Out of curiosity about the planned Relay integration:
* Will the integration be using #7987?
* If you have decided, please let us know which other APIs will (at least initially) be used to create the high-level, non-scheduled PrimFunc?
* Will it include rewriting schedules in TOPI
Thanks @MeeraN7 @giuseros, I like the approach of making the vectorized loop explicit with a `VL` parameter at the TIR level, in contrast to how fixed-width vectorization is done today.
If possible, I think it is better not to introduce user-facing changes, since as far as an API is concerne
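[Editor's note] To illustrate what an explicit vector-length parameter buys over fixed-width vectorization, here is a plain-Python sketch (hypothetical, not the actual TIR lowering): the loop body is written once against `vl`, so the same code handles any hardware vector length, including a partial final chunk, without a separate scalar tail loop.

```python
def vla_add(a, b, vl):
    """Add two lists in chunks of vl, mimicking a vector-length-agnostic loop.

    With fixed-width vectorization, the lane count is baked in (e.g. 4) and a
    scalar tail loop handles leftovers; parameterizing on vl keeps one loop.
    """
    assert vl > 0
    n = len(a)
    out = [0] * n
    i = 0
    while i < n:
        lanes = min(vl, n - i)        # final chunk may be partial (predicated)
        for j in range(lanes):        # stands in for one vector instruction
            out[i + j] = a[i + j] + b[i + j]
        i += lanes
    return out

print(vla_add([1, 2, 3, 4, 5], [10, 20, 30, 40, 50], vl=4))  # [11, 22, 33, 44, 55]
```

On hardware with scalable vectors (e.g. Arm SVE), `vl` would come from the machine at run time rather than being a Python argument.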