Re: [apache/tvm] [VOTE] Transition Main to Unity (Issue #16368)

2024-01-16 Thread Bohan Hou
+1

Re: [apache/tvm] [VOTE] Clarify Community Strategy Decision Process (Issue #15521)

2023-08-10 Thread Bohan Hou
+1

[Apache TVM Discuss] [Development] [DISCUSS] TVM Community Strategy for Foundational Models

2023-07-27 Thread Bohan Hou via Apache TVM Discuss
Foundational models are important workloads. By pushing local/server LLM inference to the extreme in the TVM stack, I believe we can bring the resolution of pain points to a new stage, so that people can use TVM as THE deep learning compiler in general scenarios, which is necessary for us t…

Re: [apache/tvm] [VOTE] Release Apache TVM v0.13.0.rc0 (Issue #15313)

2023-07-17 Thread Bohan Hou
+1

[apache/tvm] [release] Bump version numbers to 0.13.0 (PR #15216)

2023-07-03 Thread Bohan Hou
This bumps all the version numbers on the v0.13.0 branch to v0.13.0. You can view, comment on, or merge this pull request online at: https://github.com/apache/tvm/pull/15216 -- Commit Summary -- * Bump version numbers to 0.13.0 -- File Changes -- M conda/recipe/meta.yaml (2) M …

Re: [apache/tvm] [Release] v0.13.0 release schedule (Issue #15134)

2023-07-03 Thread Bohan Hou
@ysh329 cc: https://github.com/apache/tvm/pull/15216

Re: [apache/tvm] [Release] v0.13.0 release schedule (Issue #15134)

2023-07-02 Thread Bohan Hou
> The new branch and tag are now ready.

Re: [apache/tvm] [Release] v0.13.0 release schedule (Issue #15134)

2023-06-23 Thread Bohan Hou
I volunteer to help manage the release.

Re: [apache/tvm] [VOTE] Establish TVM Unity Connection Technical Strategy (Issue #12651)

2022-08-31 Thread Bohan Hou
+1

Re: [apache/tvm] [TIR][REFACTOR][RFC] ForNode -- Introduce Annotations and ThreadBinding to for_type (#7302)

2021-01-18 Thread Bohan Hou
The problem is that `launch_thread` was not designed to print those additional hints, since `launch_thread` is now syntax for Attr.
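
For readers without the thread context, a hedged sketch of the point (written against today's `tvm.script` API; the function name `fill` and the buffer shape are arbitrary): `launch_thread` is sugar for an `AttrStmt` scope with `attr_key="thread_extent"` around the body, which is why the printer treats it as an attribute rather than a loop.

```python
import tvm
from tvm.script import tir as T

# launch_thread desugars to AttrStmt("thread_extent") around the body;
# the printer shows an attribute scope, not a For node.
@T.prim_func
def fill(a: T.handle) -> None:
    A = T.match_buffer(a, (128,), "float32")
    tx = T.env_thread("threadIdx.x")
    T.launch_thread(tx, 128)
    A[tx] = 0.0
```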

[Apache TVM Discuss] [Development/RFC] [RFC] Hybrid Script Support for TIR

2020-09-30 Thread Bohan Hou via Apache TVM Discuss
@ziheng Sorry for my late reply. The code of `tvm.build` in build_module.py on the master branch is attached below. After we define the TIR PrimFunc with the script, we should be able to put it inside an IRModule and use `tvm.build` as normal.

```python
if isinstance(inputs, schedule.Schedule):
    ...
```
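
As a concrete illustration of that flow, a hedged sketch written against today's `tvm.script` API, which postdates this thread (`add_one` is a made-up example):

```python
import tvm
from tvm.script import tir as T

# Define a TIR PrimFunc directly with the script.
@T.prim_func
def add_one(a: T.handle, b: T.handle) -> None:
    A = T.match_buffer(a, (128,), "float32")
    B = T.match_buffer(b, (128,), "float32")
    for i in range(128):
        with T.block("add"):
            vi = T.axis.spatial(128, i)
            B[vi] = A[vi] + 1.0

# Put it inside an IRModule and use tvm.build as normal.
mod = tvm.IRModule({"main": add_one})
lib = tvm.build(mod, target="llvm")
```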

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-22 Thread Bohan Hou via Apache TVM Discuss
[quote="merrymercy, post:37, topic:7872"] I mean the original TE is a declarative language so it can know all transformation before it starts to generate low-level AST. But the new schedule primitives are done imperatively. In the original TE, we can share some analysis results (e.g. dependenc

[Apache TVM Discuss] [Development/RFC] [RFC] Rename Hybrid Script

2020-09-21 Thread Bohan Hou via Apache TVM Discuss
No matter which option we take, do we have to discriminate between functions and classes when annotating with a decorator?

[Apache TVM Discuss] [Development/RFC] [RFC] Rename Hybrid Script

2020-09-15 Thread Bohan Hou via Apache TVM Discuss
`tvm.script` looks good to me.

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-13 Thread Bohan Hou via Apache TVM Discuss
Thanks for your reply! @MinminSun The cache_read/cache_write API accepts a Buffer and a new scope as input, performs some checks to ensure reading/writing the Buffer through the cache causes no problems, and creates new blocks to do the cache transfer.
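
For later readers: in the schedule API that eventually landed, the buffer is selected by its read index within a block rather than passed in directly. A hedged usage sketch (workload and names are illustrative):

```python
import tvm
from tvm import te

A = te.placeholder((128,), name="A")
B = te.compute((128,), lambda i: A[i] * 2.0, name="B")

sch = tvm.tir.Schedule(te.create_prim_func([A, B]))
block = sch.get_block("B")
# Stage reads of A (read-buffer index 0) into a "shared"-scope cache block;
# the primitive validates the accesses and inserts the transfer block.
A_shared = sch.cache_read(block, 0, "shared")
```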

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-11 Thread Bohan Hou via Apache TVM Discuss
Thanks for your reply! @kevinthesun [quote="kevinthesun, post:9, topic:7872"] Thank you for this proposal! This work does make scheduling much easier. I have a concern about using this way to write a tensor expression. It looks more complicated than tvm.compute when defining matmul. We…
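
For context on the quoted comparison, a hedged sketch of both forms under current APIs (names are illustrative): the concise TE one-liner versus the same matmul spelled out as a TensorIR PrimFunc.

```python
import tvm
from tvm import te
from tvm.script import tir as T

# The TE/compute form the quote refers to.
A = te.placeholder((128, 128), name="A")
B = te.placeholder((128, 128), name="B")
k = te.reduce_axis((0, 128), name="k")
C = te.compute((128, 128),
               lambda i, j: te.sum(A[i, k] * B[k, j], axis=k),
               name="C")

# The same matmul written explicitly as a TensorIR PrimFunc.
@T.prim_func
def matmul(a: T.handle, b: T.handle, c: T.handle) -> None:
    A = T.match_buffer(a, (128, 128), "float32")
    B = T.match_buffer(b, (128, 128), "float32")
    C = T.match_buffer(c, (128, 128), "float32")
    for i, j, kk in T.grid(128, 128, 128):
        with T.block("matmul"):
            vi, vj, vk = T.axis.remap("SSR", [i, j, kk])
            with T.init():
                C[vi, vj] = 0.0
            C[vi, vj] = C[vi, vj] + A[vi, vk] * B[vk, vj]
```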

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-10 Thread Bohan Hou via Apache TVM Discuss
[quote="ds1231h, post:3, topic:7872"] However, will this increase the coupling between the schedule and the lower pass, which may lead to an increase in the complexity of the lower pass? [/quote] Thanks for your reply! @ds1231h At the moment, we at first transform TIR with block to TIR without

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-10 Thread Bohan Hou via Apache TVM Discuss
Thanks for your reply! @jcf94 A1. We've tried tensorizing intrinsics using this new IR, and are working on the TensorCore demo. Our design is really close to the original tensorize programming logic; it only differs in the declaration of the description & implementation of the HW intrinsic (we can use Hy…
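
To ground "description & implementation" for later readers: in the API that eventually landed, both halves are TVM-script PrimFuncs registered as a `TensorIntrin`. A hedged sketch; the registration name `demo.vadd4` and the extern call `vadd4` are hypothetical stand-ins for a real HW intrinsic.

```python
import tvm
from tvm.script import tir as T

# Description: what the intrinsic computes, in plain TIR semantics.
@T.prim_func
def vadd4_desc(a: T.handle, c: T.handle) -> None:
    A = T.match_buffer(a, (4,), "float32", offset_factor=1)
    C = T.match_buffer(c, (4,), "float32", offset_factor=1)
    with T.block("root"):
        T.reads(A[0:4])
        T.writes(C[0:4])
        for i in range(4):
            with T.block("update"):
                vi = T.axis.spatial(4, i)
                C[vi] = A[vi] + 1.0

# Implementation: how the hardware actually performs it.
@T.prim_func
def vadd4_impl(a: T.handle, c: T.handle) -> None:
    A = T.match_buffer(a, (4,), "float32", offset_factor=1)
    C = T.match_buffer(c, (4,), "float32", offset_factor=1)
    with T.block("root"):
        T.reads(A[0:4])
        T.writes(C[0:4])
        # Stand-in for the real HW intrinsic call.
        T.evaluate(T.call_extern("int32", "vadd4",
                                 C.access_ptr("w"), A.access_ptr("r")))

# Register the pair; sch.tensorize(loop, "demo.vadd4") applies it later.
tvm.tir.TensorIntrin.register("demo.vadd4", vadd4_desc, vadd4_impl)
```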