+1
--
Reply to this email directly or view it on GitHub:
https://github.com/apache/tvm/issues/16368#issuecomment-1881947202
You are receiving this because you are subscribed to this thread.
Message ID:
Thanks everyone for the year-long discussion.
I'd love to note that, in retrospect, we would have already missed the boat of
generative AI, and Apache TVM would have lost momentum of empowering the
community on the up-to-date workloads they are interested in, if we decided to
follow the default
+1 Having carefully read all the responses from the community, I'm even more
convinced that it is the right operational guideline for us to pursue as a
whole community, because for any strategic decision, it helps incorporate the
voices of both sides and sets a clear bar to pass.
As a diverse community, there are clearly different opinions on rules and
guidelines of community operation, and it's not unusual that people agree to
disagree. As PMC members, it's our responsibility to take actions to navigate
the way the community operates, hearing voices from the community
Well, I mean, it is awesome to have MSC, but is there any way we could
potentially split the PR into chunks of a reviewable size?
--
https://github.com/apache/tvm/pull/15489#issuecomment-1666425211
I was a bit confused, and would love to clarify my points first, and ask
@leandron to clarify some of your points :-)
> It might become too costly for some to keep up with gigantic puzzle of
> features that are just too complicated to make work.
To clarify, according to my read, this RFC is par
+1. I'm in favor of this. A healthy community needs to make progress and keep
up with the latest trends, and a supermajority vote is for now the best way we
can think of to balance the need for stability against stagnation
A full-featured Llama2 implementation in only 200 lines of code based on this
project: https://github.com/mlc-ai/mlc-llm/pull/631
---
[Visit Topic](https://discuss.tvm.apache.org/t/design-torchy-productive-model-definition-in-tvm-unity/15404/2)
Let's get this PR merged instead!
--
https://github.com/apache/tvm/pull/15346#issuecomment-1641599151
+1 (binding)
Checked:
- checksum and PGP key
- compilation
- tests on TIR, arith, meta schedule, TE
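For reference, the checksum and PGP checks above can be sketched as a few shell commands. This is a hedged sketch only: the artifact below is a stand-in created on the fly, not the actual staged release tarball, and the real vote uses the published `.sha512` and `.asc` files.

```shell
# Hedged sketch of the checksum/PGP verification steps for a release vote.
# The artifact here is a stand-in; a real vote downloads the staged tarball.
set -e
cd "$(mktemp -d)"
echo "release contents" > apache-tvm-src.tar.gz   # stand-in artifact

# 1. Checksum: verify the tarball against the published .sha512 file.
sha512sum apache-tvm-src.tar.gz > apache-tvm-src.tar.gz.sha512
sha512sum -c apache-tvm-src.tar.gz.sha512

# 2. PGP signature (requires the project KEYS file and the .asc signature):
#    gpg --import KEYS
#    gpg --verify apache-tvm-src.tar.gz.asc apache-tvm-src.tar.gz
```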
--
https://github.com/apache/tvm/issues/15313#issuecomment-1638554186
Let's fix the lint and get it merged
--
https://github.com/apache/tvm/pull/15267#issuecomment-1627473770
@tvm-bot rerun
--
https://github.com/apache/tvm/pull/15216#issuecomment-1620829033
@tvm-bot /rerun
--
https://github.com/apache/tvm/pull/15216#issuecomment-1620816711
It is worth pointing out that:
* Most of the existing tests are CPU-bound, including those that use GPUs for
execution (end-to-end tests), which also rely heavily on the CPU for code
generation
* All e2e tests can be decoupled as host-side compilation on CPU + execution on
device (e.g. GPUs)
* Brute
Merged #100 into main.
--
https://github.com/apache/tvm-rfcs/pull/100#event-9050809705
Merged #14646 into main.
--
https://github.com/apache/tvm/pull/14646#event-9029781448
+1
--
https://github.com/apache/tvm/issues/14260#issuecomment-1472386963
+1
--
https://github.com/apache/tvm/issues/14129#issuecomment-1445299491
Any updates?
--
https://github.com/apache/tvm/issues/13586#issuecomment-1436263759
My position:
- Relay and Relax are going to co-exist as parallel submodules in TVM, and one
should not affect the other at all;
- Committed to keeping Relay source code in "main" in the foreseeable future
without hinting about potential deprecation;
- Having Relax in "main" >>> having Relax in a s
Merged #13351 into main.
--
https://github.com/apache/tvm/pull/13351#event-7817387975
Merged #13039 into main.
--
https://github.com/apache/tvm/pull/13039#event-7565945597
Thanks for the discussion so far! Wearing my Apache TVM hat, I would love to see
our community making progress to satisfy the broader community and work with the
trend of deep learning compilation, rather than being gridlocked by a single
party of interest.
+1 (binding)
--
https://github.com/apache/tvm/issues/12651#issuecomment-1233389506
+1 (binding)
--
https://github.com/apache/tvm/issues/12651#issuecomment-1231808637
+1 (binding)
--
https://github.com/apache/tvm/issues/12583#issuecomment-1226698748
+1 (binding)
By the way, in which circumstances do we need to vote? I suppose it's an
approved RFC, and I didn't see any objection.
On Wed, Aug 24, 2022 at 15:50 driazati
wrote:
> +1
>
> --
> Reply to this email directly or view it on GitHub:
> https://github.com/apache/tvm/issues/12583#issuecomment-122655662
Thank you @leandron @ekalda for the questions, and @zhiics, @slyubomirsky,
@Hzfengsy, @sunggg for the discussion!
As a long-term contributor since 2018 (the pre-Relay era), and the initiator and
one of the top 2 contributors of RAF
([https://github.com/awslabs/raf/](https://github.com/awslabs/raf/)), the
Merged #79 into main.
--
https://github.com/apache/tvm-rfcs/pull/79#event-7130575684
Thanks for following up!
> the parser and IRBuilder are only called for statements and for function
> calls. Is this correct?
In fact we allow registering visitors for any Python AST constructs. For
example, we could specify the behavior when visiting type annotations, function
arguments, etc.
I believe we are all aware the RFC is to support general IRBuilder-based
metaprogramming, and with this context, I would love to address your concerns
as below.
> there is no way to handle different parser rules based on context
Our design handles context-dependent parsing in a unified approach
> So if you define a variable in a quoted portion, you should be able to
> reference it in the unquoted portion
- From quoted to unquoted: Yes, that's correct.
- From unquoted to quoted: For security concerns, accessing values from the unquoted
portion will require explicit specification if the values
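To make the visitor-registration idea concrete, here is a minimal sketch built on Python's standard `ast` module. It is not the actual TVMScript parser (the class and field names below are illustrative only); it simply shows handlers registered for type annotations and function arguments, not just statements and calls.

```python
import ast

# Minimal sketch (not the actual TVMScript parser): a visitor that registers
# handlers for arbitrary Python AST constructs, including type annotations
# and function arguments.
class MiniParser(ast.NodeVisitor):
    def __init__(self):
        self.events = []

    def visit_AnnAssign(self, node):
        # Annotated assignment, e.g. `x: int = 0` -- record the annotation text.
        self.events.append(("annotation", ast.unparse(node.annotation)))
        self.generic_visit(node)

    def visit_arg(self, node):
        # Function argument, e.g. `def f(a, b)` -- record the argument name.
        self.events.append(("arg", node.arg))
        self.generic_visit(node)

source = """
def f(a, b):
    x: int = 0
    return a + b + x
"""
parser = MiniParser()
parser.visit(ast.parse(source))
print(parser.events)  # [('arg', 'a'), ('arg', 'b'), ('annotation', 'int')]
```

The same dispatch mechanism extends to any other AST node type by adding a `visit_<NodeName>` method.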
@slyubomirsky Sure! Please see F1 and F2 for existing meta-programming
capability
(https://github.com/yelite/tvm-rfcs/blob/tvmscript-metaprogramming/rfcs/0079-tvmscript-metaprogramming.md#f1-template-metaprogramming),
and see F4 for interleaving python interpreter with the parser. The quotation
To follow up on our latest discussion with @tkonolige @areusch @csullivan
@jwfromm et al.
The following questions are raised in our discussion:
1. Move discussion of vendor IR to tradeoffs / benefits section rather than
core motivation.
2. (Section 2) Parser registration example is a little co
I'm merging this RFC as it seems that our discussion has reached consensus, but
feel free to follow up any time!
--
https://github.com/apache/tvm-rfcs/pull/74#issuecomment-1162072459
Merged #74 into main.
--
https://github.com/apache/tvm-rfcs/pull/74#event-6849482980
Thanks @areusch for pointing me to the thread! Definitely happy to read the
[discussion](https://discuss.tvm.apache.org/t/export-tir-to-json/12329), and
glad to see that @vinx13 unblocks the `SaveJSON` method for NDArrays :-)
completely agreed that TVMScript could be the usecase which provides
How about we consolidate our discussion to the RFC thread
(https://github.com/apache/tvm-rfcs/pull/79) so that people can see what's
happening in a centralized place?
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-tvmscript-metaprogramming/12969/4)
Hey I'm happy to discuss more, and let's keep this RFC open until the end of
this week
--
https://github.com/apache/tvm-rfcs/pull/74#issuecomment-1158071311
@areusch Thanks for following up!
> what's the motivation for someone to use IRBuilder instead of just
> serializing the TVMScript to JSON via parse/print
JSON is mostly for serializing an IR after it's constructed (which users cannot
manipulate), and the TVMScript format is for users to constr
Hey @areusch, thanks for elaborating on your points; these are all definitely
great questions to me!
> right now i believe we have 3 TIR formats: repr(), TVMScript, and JSON. This
> RFC looks to provide infra that allows for generation of more formats e.g. so
> that Python doesn't have to be the
To summarize offline discussion with @areusch:
Q: Is this going to unify current fragmented printing formats?
A: Yes for TIR. After following the standard deprecation procedure (see Section
"Upgrade Plan"), TVMScript will be the only frontend (i.e. user-facing
printer/parsing) for TIR, while the
Thanks @areusch for your response!
> if TVMScript is a core way in which TIR is used, I'd argue we should treat
> them conceptually as joined (e.g. TVMScript as the recommended roundtrip text
> format for TIR). What are your thoughts there?
Let's phrase it this way: TVMScript serves as a fronte
> Relay has a single roundtrippable serialization format, as do most languages.
> I think we benefit from this in that we only have one set of tests to
> maintain.
To clarify, Relay has two roundtrippable serialization formats: the text format
and JSON. People use the text format for readability, and i
Merged #11461 into main.
--
https://github.com/apache/tvm/pull/11461#event-6694590679
@Mousius Thank you so much for your response! This makes lots of sense to me!
Also, thanks for including my personal principles in the discussion! They're my
personal principles, which are completely okay to disagree with :-)
> I'm not sure why we consider that pollution given it should have a posit
Thanks @Mousius for drafting this RFC!
First of all, I completely agree on the importance of handling `arch`-specific
checks. To use our experience as an example: on CUDA, we might want to check if
the PTX intrinsic `cp.async.commit_group` is available on a certain architecture
before tensorizing using
It's definitely a legitimate ask, but I would love to remind everyone that, as
the Apache-preferred approach, it would be better if technical discussion
remains archivable, so that the door is always open for all community
participants to understand the technical discussion without having to appear at cert
We are more than happy to collaborate on AutoTIR side to make good things
happen :-)
--
https://github.com/apache/tvm-rfcs/pull/72#issuecomment-1125461620
+1
--
https://github.com/apache/tvm/issues/10471#issuecomment-1058494070
It definitely makes sense for us to reduce the traffic from the GitHub emails.
GitHub teams are definitely a good idea. I'm supportive 👍
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-remove-codeowners/12095/4)
This is a definitely interesting use case, which unifies `AttrStmt` with
definitions of Attr elsewhere in the codebase. Given `AttrStmt` is something
we wanted to move away from, I would love to confirm with @tqchen that the
change is acceptable.
[quote="wrongtest, post:1, topic:12118"]
But certain pragma annotations can not get lowerer to `T.attr`,only those of
expression typed values are allowed
[/quote]
Would you like to elaborate? Currently the type of `AttrStmtNode::value` is
`PrimExpr`, but which type cannot be supported by TVMSc
Merged #39 into main.
--
https://github.com/apache/tvm-rfcs/pull/39#event-6092157169
Thank you all for the discussion @Lunderberg @vinx13 @areusch!
--
https://github.com/apache/tvm-rfcs/pull/39#issuecomment-1043226453
Thank you @cbalint13 for your kind response! We are super excited to hear about
your work and more than happy to assist/collaborate on TensorIR/MetaSchedule!
--
https://github.com/apache/tvm/issues/8473#issuecomment-1022778418
Hey @cbalint13 thanks for asking! Absolutely!
> Was Auto Tensorization removed form this list (was at section [M4b] if I
> recall), what was/is the plan with ?
The only reason is that I'm trying to organize the roadmap. Auto tensorization
is a huge item and we want to have a separate tracking i
CC: @yzhliu @yzh119 @comaniac
--
https://github.com/apache/tvm-rfcs/pull/51#issuecomment-1008200541
CC: @zxybazh @comaniac @merrymercy
--
https://github.com/apache/tvm/pull/9875#issuecomment-1007900534
Thanks for the proposal! We are very interested in improving search algorithms
and cost model. I was very excited to read about FamilySeer a week ago.
In terms of the subgraph similarity, AFAIK @comaniac and @zxybazh have been
working independently on this topic to improve overall search time
@cxy would you like to update the pre-RFC according to our discussion? Thanks a
lot!
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-runtime-bring-packedfunc-into-tvm-object-system/11816/9)
Let's leave this pre-RFC open for a week, and then send a formal RFC with
clarifications to https://github.com/apache/tvm-rfcs/
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-runtime-bring-packedfunc-into-tvm-object-system/11816/8)
To summarize our offline discussion with @areusch @tqchen.
Clarification:
1. This RFC doesn't change any of the existing functionality, including C ABI
or PackedFunc's C++ API. Any modification to the C ABI is out of scope of this
RFC.
2. Calling a PackedFunc inside TVM codebase directly uses
Yeah I was on vacation and didn't track closely. Sorry for the confusion!
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-top-byoc-intel-libxsmm-integration/11688/18)
I'm happy to shepherd this RFC
CC: @spectrometerHBH @tqchen @areusch
---
[Visit Topic](https://discuss.tvm.apache.org/t/bring-packedfunc-into-tvm-object-system/11816/2)
Shall we conclude this pre-RFC and send a formal RFC to
https://github.com/apache/tvm-rfcs/?
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-top-byoc-intel-libxsmm-integration/11688/15)
Another idea @zxybazh and I have been discussing is the possibility of unifying
the Python and C++ logging systems via packed functions, i.e. the C++ logging
system could potentially call back into Python's logging module.
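A pure-Python sketch of that idea, under loud assumptions: none of the names below are TVM's actual API, and the "C++ side" is simulated by a plain function. It only illustrates the shape of the design, where a callback slot (which a packed function would fill) routes records into Python's standard `logging` module.

```python
import logging

# Hypothetical names throughout -- this is a sketch, not TVM's API.
_log_callback = None  # stands in for the slot a PackedFunc would fill

def register_log_callback(fn):
    """Python registers the function the 'C++' logger will call back into."""
    global _log_callback
    _log_callback = fn

def cxx_log(level, msg):
    """Stands in for the C++ logging system emitting a record."""
    if _log_callback is not None:
        _log_callback(level, msg)

# Python side: forward all "C++" records into the standard logging module,
# so both sides share one configuration (levels, handlers, formatting).
logger = logging.getLogger("tvm.cxx")
register_log_callback(lambda level, msg: logger.log(level, msg))

logging.basicConfig(level=logging.INFO)
cxx_log(logging.INFO, "tuning trial finished")  # handled by Python's logging
```

The benefit of this shape is that users configure logging once, in Python, and C++ output obeys the same filters and handlers.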
This is definitely extremely important! I would love to further discuss with
you on the environment variables used here. Do you think it’s possible to unify
DMLC_ and TVM_ environment variables? Is it possible to minimize the number of
env variables to use? Is there any precedent we could refe
Looks like it could be abstracted as calling a packed function... On the
low level, you may use `call_packed` as demonstrated
[here](https://github.com/apache/tvm/blob/main/tests/python/unittest/test_te_tensor.py#L187);
CC @yuchenj on the high-level IR
Closed #9566.
--
https://github.com/apache/tvm/issues/9566#event-5668430772
The release candidate v0.8.rc0 is approved:
* Voting thread: https://github.com/apache/tvm/issues/9504
* Voting result: https://github.com/apache/tvm/issues/9566
Closed #9416.
--
https://github.com/apache/tvm/issues/9416#event-5668327879
Release candidate is approved:
* Voting thread: https://github.com/apache/tvm/issues/9504
* Voting result: https://github.com/apache/tvm/issues/9566
Thanks @electriclilies! The typo is fixed :-)
--
https://github.com/apache/tvm/issues/9504#issuecomment-975909111
It's not supported yet in TensorIR, but will be a good thing to have in the
future :-)
---
[Visit Topic](https://discuss.tvm.apache.org/t/discuss-embed-more-bound-information-into-var-or-expr/4079/33)
Using `assert_stmt` is a valid approach.
---
[Visit Topic](https://discuss.tvm.apache.org/t/discuss-embed-more-bound-information-into-var-or-expr/4079/31)
Dear TVM community,
This is a call for a vote to release Apache TVM version 0.8.0. This is a major
release with many new features and improvements. All users of Apache TVM 0.7 are
advised to upgrade. The release is co-managed by Wuwei Lin (@vinx13).
Link to release notes:
https://github.com/apac
# Apache TVM v0.8 Release Note
- [Overview](#overview)
- [Accepted RFCs](#accepted-rfcs)
- [Features and Improvements](#features-and-improvements)
- [TE, TIR, TVMScript](#te-tir-tvmscript)
- [AutoTVM, AutoScheduler, Meta Schedule](#autotvm-autoscheduler-meta-schedule)
- [Operato
Merged #9503 into v0.8.
--
https://github.com/apache/tvm/pull/9503#event-5611479918
CC @vinx13 @tqchen
You can view, comment on, or merge this pull request online at:
https://github.com/apache/tvm/pull/9503
-- Commit Summary --
* [Release] Bump version to 0.8.0 (https://github.com/apache/tvm/pull/9503/commits/0782f083b328f61dca71a7bb4f1d7c26c85feb27)
* https://github.com
How about we define a new target kind:
```
{
"kind": "packaged", # probably need a better name, please propose new ones
"runtime": "crt", # the "runtime" in the proposal
"executor": { # the codegen target for relay function
# i.e. the "executor" in the propos
@Mousius I totally agree with making things hygienic, and I believe folding
things into Target is the correct and consistent approach.
First of all, the automation system solely relies on the target object to
understand the code dispatching, hardware specs and runtime information.
Without having the
@areusch and I had a long offline discussion yesterday, and he helped me
understand the concern from the UX perspective: if we fold the executor into
the target, then it's more difficult to separate the config coming from two
parties, where one party implements the codegen and the other implements the executor.
On
Just re-re-triggered the CI. Seems very flaky
--
https://github.com/apache/tvm/pull/9488#issuecomment-966017216
You can view, comment on, or merge this pull request online at:
https://github.com/apache/tvm/pull/9488
-- Commit Summary --
* add PGP into KEYS (https://github.com/apache/tvm/pull/9488/commits/592860293ce1016d5d6544c0db47a5c20fc4)
* https://github.com/apache/tvm/pull/9488/commits/70
Blocker #9486
--
https://github.com/apache/tvm/issues/8976#issuecomment-965562565
The release is cut and is available for test in
https://github.com/apache/tvm/tree/v0.8
--
https://github.com/apache/tvm/issues/8976#issuecomment-964264280
@jiangjiajun Yes, we will update the v0.8 branch and cut a release candidate on
Nov 8, 2021. After the cut, we will ask the community and PMC members to test
the release, and if there is no regression we will make the release official.
Thank you @Mousius for the RFC! It's great to read about potential user
experience issues with the current Target system, and I'm happy to discuss
potential ways to improve it.
## Proposed APIs in the RFC
`CompilationConfig`, as proposed in this RFC, aims to improve UX by wrapping a
list of
Blocker: We need this bugfix in to address a regression
https://github.com/apache/tvm/pull/9421
--
https://github.com/apache/tvm/issues/8976#issuecomment-958011393
> Should we wait for PyTorch TVM PR #8777? It should be merged soon.
@masahi we can wait for it if this PR could get in this week
Thank you @masahi for helping edit the description for Vulkan! It looks pretty
nice to me :-)
Thanks @jiangjiajun for proofreading the PaddlePaddle-related text. Yep these
commits were not there a month ago when we collected the initial changelog
draft. Thanks to @vinx13, who acted swiftly and
@masahi @Lunderberg Yeah I totally agree! Would you guys suggest more details
like "improved vulkan backends on ..."? Thanks a lot!
Also, if there is any bug/issue blocking the release, please don't hesitate to
let us know in this thread :-)
--
https://github.com/apache/tvm/issues/8976#issuecomment-956449578
Hi all, we cut a v0.8 release branch for Apache TVM:
https://github.com/apache/tvm/tree/v0.8. Please find:
- The release note (candidate): https://github.com/apache/tvm/issues/9416
- The full changelog (candidate):
https://gist.github.com/junrushao1994/c669905dbc41edc2e691316df49d8562
There have
# Apache TVM v0.8 Release Note
--
https://github.com/apache/tvm/issues/9416
@Mousius Thanks for asking!
> does this mean that 0.8 will go out with half finished implementations for
> things, such as library integrations (i.e. CMSIS-NN) and tvmc arguments (tvmc
> is not yet stable as there's breaking changes incoming)
Yes, we directly cut main into the v0.8 branch:
htt
@jiangjiajun Sure! We will list experimental paddlepaddle frontend support as a
separate category and a highlight of this release
@areusch Thanks for asking! Sorry, I was on vacation at the time of the post.
@vinx13 and I are actively drafting a release note, and will cut a release
candidate by next Monday (Nov 1, 2021). If there is any small commit after
Monday that needs to be included in the RC, please let us know in thi