Foundation models are important workloads. By pushing local and server inference of LLMs to the extreme in the TVM stack, I believe we can take the resolution of pain points to a new stage and make TVM THE deep learning compiler for general scenarios.
---
@ziheng Sorry for my late reply. The `tvm.build` code from `build_module.py` on the master branch is attached below.
After we define a TIR PrimFunc with the script, we should be able to put it inside an IRModule and use `tvm.build` as normal.
```python
if isinstance(inputs, schedule.Schedule):
    ...  # (the rest of tvm.build was truncated in the original post)
```
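For context, here is a minimal sketch of that workflow in today's `tvm.script` syntax (which postdates this thread; `add_one` and the shapes are illustrative):

```python
import tvm
from tvm.script import tir as T

# a toy PrimFunc written in the script syntax (illustrative)
@T.prim_func
def add_one(A: T.Buffer((128,), "float32"), B: T.Buffer((128,), "float32")) -> None:
    for i in range(128):
        with T.block("add"):
            vi = T.axis.spatial(128, i)
            B[vi] = A[vi] + 1.0

# wrap the PrimFunc in an IRModule and build as normal
mod = tvm.IRModule({"main": add_one})
lib = tvm.build(mod, target="llvm")
```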
---
[quote="merrymercy, post:37, topic:7872"]
I mean the original TE is a declarative language, so it can know all transformations before it starts to generate the low-level AST. But the new schedule primitives are applied imperatively. In the original TE, we can share some analysis results (e.g. dependency ...)
[/quote]
---
No matter which option we take, do we have to discriminate between a function and a class when annotating with the decorator?
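For reference, the `tvm.script` API that eventually shipped does draw exactly this distinction (a sketch in current syntax, not from the original discussion): a decorated function becomes a single PrimFunc, while a decorated class becomes an IRModule grouping its functions.

```python
import tvm
from tvm.script import tir as T

# decorating a function yields one PrimFunc
@T.prim_func
def scale(A: T.Buffer((8,), "float32"), B: T.Buffer((8,), "float32")) -> None:
    for i in range(8):
        with T.block("scale"):
            vi = T.axis.spatial(8, i)
            B[vi] = A[vi] * 2.0

# decorating a class yields an IRModule containing its methods
@tvm.script.ir_module
class MyModule:
    @T.prim_func
    def main(A: T.Buffer((8,), "float32"), B: T.Buffer((8,), "float32")) -> None:
        for i in range(8):
            with T.block("scale"):
                vi = T.axis.spatial(8, i)
                B[vi] = A[vi] * 2.0
```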
---
`tvm.script` looks good to me.
---
Thanks for your reply! @MinminSun
The cache_read/cache_write API accepts a Buffer and a new scope as input, performs checks to ensure that reading/writing the Buffer through the cache causes no problems, and creates new blocks that do the cache transfer.
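As a rough illustration, this is how the primitive looks in the TensorIR schedule API that later landed; note the shipped `cache_read` identifies the buffer by its read index rather than taking a `Buffer` object, and `mod` plus the block name `matmul` are placeholders here:

```python
from tvm import tir

# mod is assumed to be an IRModule whose PrimFunc contains a "matmul" block
sch = tir.Schedule(mod)
block = sch.get_block("matmul")

# stage the block's first read buffer into shared memory; this creates
# a new block that performs the cache transfer
cached = sch.cache_read(block, read_buffer_index=0, storage_scope="shared")
```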
---
Thanks for your reply! @kevinthesun
[quote="kevinthesun, post:9, topic:7872"]
Thank you for this proposal! This work does make scheduling much easier. I have a concern about using this way to write a tensor expression. It looks more complicated than tvm.compute when defining matmul.
[/quote]
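For reference, the `te.compute` matmul that the concern compares against is quite compact (standard TE code, reconstructed here rather than quoted from the thread):

```python
from tvm import te

n = 128
A = te.placeholder((n, n), name="A")
B = te.placeholder((n, n), name="B")
k = te.reduce_axis((0, n), name="k")
C = te.compute((n, n), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")
```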
---
[quote="ds1231h, post:3, topic:7872"]
However, will this increase the coupling between the schedule and the lower pass, which may lead to an increase in the complexity of the lower pass?
[/quote]
Thanks for your reply! @ds1231h
At the moment, we first transform TIR with blocks into TIR without blocks.
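The idea is that blocks are compiled away before the classic low-level passes run, so those passes still see block-free TIR. A minimal way to observe this in today's API (assuming `mod` is an IRModule holding block-based TIR):

```python
import tvm

# lowering removes blocks; the result is the flat TIR that the
# existing low-level passes consume
lowered = tvm.lower(mod)
print(lowered.script())
```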
---
Thanks for your reply! @jcf94
A1. We've tried to tensorize intrinsics using this new IR, and we are working on the TensorCore demo. Our design is really close to the original tensorize programming logic; it only differs in the declaration of the description & implementation of the HW intrinsic (we can use Hybrid Script ...)
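To give a flavor of declaring the description and implementation of an intrinsic in script form, here is a sketch using the `TensorIntrin` API that later landed in TensorIR; the 4-element copy intrinsic and all names are illustrative, not the TensorCore demo from the thread:

```python
import tvm
from tvm.script import tir as T

# description: the computation the intrinsic stands for, in plain TIR
@T.prim_func
def copy4_desc(a: T.handle, c: T.handle) -> None:
    A = T.match_buffer(a, (4,), "float32", offset_factor=1)
    C = T.match_buffer(c, (4,), "float32", offset_factor=1)
    with T.block("root"):
        T.reads(A[0:4])
        T.writes(C[0:4])
        for i in range(4):
            with T.block("copy"):
                vi = T.axis.spatial(4, i)
                C[vi] = A[vi]

# implementation: how the target executes it (here just a vectorized loop)
@T.prim_func
def copy4_impl(a: T.handle, c: T.handle) -> None:
    A = T.match_buffer(a, (4,), "float32", offset_factor=1)
    C = T.match_buffer(c, (4,), "float32", offset_factor=1)
    with T.block("root"):
        T.reads(A[0:4])
        T.writes(C[0:4])
        for i in T.vectorized(4):
            C[i] = A[i]

# register the pair so schedules can tensorize against "copy4"
tvm.tir.TensorIntrin.register("copy4", copy4_desc, copy4_impl)
```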