@Lunderberg Hi, I am very interested in `transform_layout`, but my team
depends entirely on the TensorIR schedule instead of TE. Could you kindly share
more design points on the TensorIR side? It would be great if we could enjoy this
preview feature in TensorIR. It is really useful for us.
We have imp
Thanks a lot! I think we can then handle buffer-related issues in customized
passes in a more explicit and robust way.
I have one question on TIR script: for certain algorithms in DL workloads,
users may want to write a non-S-TIR-formed script like

```python
x = T.allocate((), "int32", "")
x
```

Reusing `T.alloc_buffer` seems good, as long as there is no ambiguity for the parser implementation :)
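As a side note on what the rank-0 allocation in the snippet above means: `T.allocate((), ...)` denotes a scalar (shape-`()`) buffer. A hedged NumPy analogy (purely for illustration; this is not how the TVMScript parser works) shows the semantics of an empty shape:

```python
import numpy as np

# A rank-0 allocation like T.allocate((), "int32", "") is a scalar buffer.
# NumPy's rank-0 arrays behave analogously.
x = np.empty((), "int32")   # scalar (rank-0) buffer
x[()] = 42                  # a rank-0 array is indexed with the empty tuple
# x.shape is () and int(x) recovers the scalar value 42
```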
--
Reply to this email directly or view it on GitHub:
https://github.com/apache/tvm-rfcs/pull/70#issuecomment-1149935097
Thanks for all the great discussions! It is exciting that we will have a more
powerful ability to handle things like padding and imperfect tiles.
Since our team relies on the S-TIR code path, we are extremely interested in
the story on the S-TIR side. I would greatly appreciate it if we have some d
Hi~ here are my two questions :)
cc @kparzysz-quic
- > 2\. Make vector length a parameter to `stage.vectorize`.
What is the difference between
- `sch[C].vectorize(v, vector_length=32)` and
- `vo, vi = sch[C].split(v, 32)` then `sch[C].vectorize(vi)`
It seems that we could als
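To make the comparison concrete, here is a hedged, pure-Python sketch (no TVM required; `vectorize(v, vector_length=32)` is only the proposal quoted above, not an existing API). It models what split-then-vectorize has to do when the loop extent is not a multiple of the vector length, which is where a built-in `vector_length` parameter could differ:

```python
# Compare: vo, vi = split(v, 32); vectorize(vi)  vs.  vectorize(v, vector_length=32)
# for an extent N that is not a multiple of the vector length VL.
N, VL = 100, 32

def split_then_vectorize(xs):
    out = [0] * N
    n_outer = (N + VL - 1) // VL        # ceil-div: the last tile is partial
    for vo in range(n_outer):
        for vi in range(VL):            # the "vectorized" inner loop
            i = vo * VL + vi
            if i < N:                   # predicate guarding the tail tile
                out[i] = xs[i] * 2
    return out

def reference(xs):
    return [x * 2 for x in xs]

xs = list(range(N))
assert split_then_vectorize(xs) == reference(xs)
```

Both forms compute the same result; the question is whether the predicated tail tile is generated explicitly by the user via `split` or hidden inside the vectorize primitive.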
At Intellif, people have built, maintained, and extended the DL compilation stack with
Relay over the past years. However, we never thought the upstreaming of a new module
would break existing functionalities or cause confusion; rather, it brings huge
opportunities to solve many technical issues which have proven to be not so easy
Hello there. The idea is just the same as the existing IR pass described in
https://discuss.tvm.ai/t/discussion-new-ir-pass-proposal-combineparalleldense/3813
by @jonso. Many sequential network structures conduct groups of matmul
operations on the same input tensor, such as:
- gate projections on state
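For illustration, a hedged NumPy sketch of the combining idea (the names `w1`/`w2` and shapes are hypothetical; this shows the arithmetic identity the pass relies on, not the Relay implementation):

```python
import numpy as np

# Two matmuls sharing the same input can be fused into one matmul on
# concatenated weights, then the output columns are split back apart.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))       # shared input
w1 = rng.standard_normal((8, 16))     # e.g. one gate projection
w2 = rng.standard_normal((8, 16))     # e.g. another projection

# Separate ops
y1, y2 = x @ w1, x @ w2

# Combined op: one wider matmul, then a column split
w = np.concatenate([w1, w2], axis=1)  # (8, 32)
y = x @ w
y1c, y2c = y[:, :16], y[:, 16:]

assert np.allclose(y1, y1c) and np.allclose(y2, y2c)
```

The combined form launches one larger kernel instead of two smaller ones, which is the main benefit for sequential structures with many parallel projections.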
As there are more and more demands on TVM's training support, one of the most
tedious but important tasks is writing backward implementations for operators.
It would be of great benefit if we could provide automation tools to help with this
process. Such a tool can serve two functionalities:
- Automati
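To make the goal concrete, here is a minimal, hedged reverse-mode autodiff sketch in pure Python. It is a toy illustration of what a backward generator automates (propagating gradients through a recorded op graph), not TVM's autodiff:

```python
# Toy reverse-mode autodiff: each op records its parents and a function
# mapping the upstream gradient to per-parent gradient contributions.
class Value:
    def __init__(self, data, parents=(), backward_fn=lambda g: ()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward_fn = backward_fn

    def __mul__(self, other):
        return Value(self.data * other.data, (self, other),
                     lambda g: ((self, g * other.data), (other, g * self.data)))

    def __add__(self, other):
        return Value(self.data + other.data, (self, other),
                     lambda g: ((self, g), (other, g)))

    def backward(self):
        # Accumulate gradients in reverse topological order.
        topo, seen = [], set()
        def visit(v):
            if id(v) not in seen:
                seen.add(id(v))
                for p in v._parents:
                    visit(p)
                topo.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(topo):
            for parent, g in v._backward_fn(v.grad):
                parent.grad += g

x, y = Value(3.0), Value(4.0)
z = x * y + x          # dz/dx = y + 1 = 5, dz/dy = x = 3
z.backward()
assert x.grad == 5.0 and y.grad == 3.0
```

An operator-level backward generator for TIR would do the analogous thing at the tensor-expression level, deriving the adjoint computation from the forward definition.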
Glad to see autodiff is already in progress! I think this RFC can be withdrawn,
since it is exactly what autodiff is doing.
Now I am very curious about the current progress of autodiff, and have some questions.
- If I have some common neural network structure such as resnet50 at hand, can
I just use a
Hi, all~
This RFC is to upstream the support for our TY-NNP accelerator backend. We are
from the AI accelerator toolchain team of
[Intellifusion](https://www.intellif.com/), which has been focusing on developing
vision processors that accelerate deep neural networks in visual recognition
and s
Thanks for your comments :)
[quote="areusch, post:3, topic:11807"]
could you say more here? is this a Relay-level thing or a TIR thing? presuming
you’ve implemented this as a pass, how do you plan to ensure that the
Relay-level pass makes the same scheduling decision as the TIR pass?
[/quote]
[quote="areusch, post:3, topic:11807"]
it seems like this could either be integrated into `ci-cpu` or as a separate
`ci-` image, so long as the binaries are publicly available. do you have an
estimate of the size of the docker image? also, just for my curiosity, would
you be able to share a ro
@mbs-octoml Hi~ Many thanks for your reply! I have a few questions:
1. What does `call_lowered` mean? Does it mean we can put PrimFuncs and relay
functions into the same IRModule and make calls to each other now?
2. For the `VirtualDevice`, it would be the interface to keep all info
Schedule annotations of `For` and `Block` are all `Map<String, ObjectRef>`. But
certain pragma annotations cannot be lowered to `T.attr`; only
expression-typed values are allowed.
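A hedged, pure-Python toy model of this lowering restriction (not TVM's actual pass; `lower_annotations` and the type stand-ins are hypothetical) might look like:

```python
# Toy model: only expression-typed annotation values become attr stmts;
# container values such as lists are rejected during lowering.
EXPR_TYPES = (int, float, str)   # stand-ins for PrimExpr/StringImm

def lower_annotations(annotations):
    attrs = []
    for key, value in annotations.items():
        if not isinstance(value, EXPR_TYPES):
            raise TypeError(
                f"Illegal attribute of key {key}, value type "
                f"{type(value).__name__} not supported")
        attrs.append(("attr", key, value))
    return attrs

assert lower_annotations({"pragma_unroll": 4}) == [("attr", "pragma_unroll", 4)]
try:
    lower_annotations({"pragma_key": [1, 2, 3]})
except TypeError as err:
    assert "not supported" in str(err)
```

Lifting the `AttrStmt` value type to `ObjectRef` would correspond to relaxing the `isinstance` check above so that container-typed annotation values survive lowering.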
---
[Visit Topic](https://discuss.tvm.apache.org/t/can-we-lift-tir-attrstmt-value-type-to-objectref/12118/1) to respond
Hi~ I think this is not an issue of TVMScript. For example, though
`List[Integer]` is supported by the script, it would fail in lowering with `Illegal
attribute of key pragma_key, value type Array not supported`, since the
annotation cannot be converted to an attr stmt.
```python
import tvm
from t
```