While discussion continues I am asking that we hold off on voting so that all
active contributors have a chance to weigh in and comment. I know of at least two
PMC members with perspectives to share who cannot comment until mid-November.
Holding off a few weeks seems like the right tradeoff for ma
+1
--
Reply to this email directly or view it on GitHub:
https://github.com/apache/tvm/issues/12103#issuecomment-1185206169
You are receiving this because you are subscribed to this thread.
+1
--
https://github.com/apache/tvm/issues/11415#issuecomment-1138938468
Closed #6620.
--
https://github.com/apache/tvm/pull/6620#event-5919865107
This PR appears to be out of date; please feel free to reopen it if this is not
the case.
As part of the new year we are attempting to triage the project's open pull
requests to ensure that code which is ready for review and/or merging receives
adequate attention.
Thanks again for your contr
Thanks for your contribution. When you have a chance could you update the
description of the PR with more details, and potentially link to the pre-RFC
discussion? Thanks!
--
Closed #7526.
--
https://github.com/apache/tvm/issues/7526#event-5688955755
Ok, Mark will split this out into follow-up issues that go on the roadmap and we
will close this one. cc @Mousius
--
https://github.com/apache/tvm/issues/7526#issuecomment-982229451
cc @mbs-octoml @denise-k is it possible for you guys to take over this one as
you now have the most context?
--
https://github.com/apache/tvm/issues/7526#issuecomment-982090940
Please join us to welcome @Mousius as a new committer to TVM.
Christopher has made great contributions to ARM's work on CMSIS and Ethos-U
support and the infrastructure around it. He has contributed to many reviews
and RFCs around uTVM and the related features. He is also one of the most
acti
@mbs-octoml can we just put a backlog item on fixing the tutorial? Going to
merge for CI.
--
https://github.com/apache/tvm/pull/9076#issuecomment-925975088
I read everything and will respond in depth in the morning after I have thought
about it for a bit. I see both of the concerns here.
--
https://github.com/apache/tvm-rfcs/pull/22
See: https://issues.apache.org/jira/browse/INFRA-22324 for more context.
--
https://github.com/apache/tvm/issues/9057#issuecomment-923462200
Dear Community:
We recently started using GitHub's CODEOWNERS to assign reviewers automatically
but many committers have complained that they are struggling with the default
settings assigning far too many pull requests to far too many people and not
providing fair scheduling across all reviewe
+1
--
https://github.com/apache/tvm/issues/8928#issuecomment-917238307
Closed #5519.
--
https://github.com/apache/tvm/pull/5519#event-5220060096
@u99127 I am doing triage on old PRs and am going to close this; please feel
free to follow up if you would still like to merge these changes. Thanks for
your contributions!
--
Thanks for the work on this. After thinking about it for a while, I believe the
proposed hooks run counter to the goals of the TECompiler and "unified lowering"
refactor that we've been working on in pieces. Our design goal is not to allow
arbitrary customization of "lowering" but instead to seal it b
LGTM thanks @comaniac
--
https://github.com/apache/tvm-rfcs/pull/13#issuecomment-887696449
Merged #7518 into main.
--
https://github.com/apache/tvm/pull/7518#event-4992785237
I think we just need to resolve @tqchen's comment, since I believe the request
was brought up by @areusch, and then we can merge.
--
https://github.com/apache/tvm-rfcs/pull/2#issuecomment
cc @mbaret
--
https://github.com/apache/tvm/pull/7518#issuecomment-805600377
Modulo some leftover polish work and documentation, I think this is ready for
review @icemelon9 @comaniac @csullivan @tkonolige @rkimball @junrushao1994
@areusch @mehrdadh
--
Need to port the fix from #7703, but otherwise this is ready for review.
--
https://github.com/apache/tvm/pull/7518#issuecomment-805594042
At the time we still had to support 2.7; in retrospect we should have pushed
harder for everyone to move to a newer version of Python. The Python community
is glacially slow at upgrading.
@kazimuth I don't think we should have to make the stub generation
non-deterministic.
In the previous w
Tracking issue for
[te_compiler](https://discuss.tvm.apache.org/t/rfc-relay-tecompiler-rewrite-existing-compile-engine-to-match-updated-compiler-flow/9233).
### About tracking issues
Tracking issues are used to record the overall progress of implementation and
relevant issues.
A tracking issue i
I agree with the change; internally it would be nice to drop the TVM prefix as
well and just refer to it as a `tvm::Device`, like we do with almost every
other data structure in the code base.
Thanks for doing this Haichen!
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-rename-tv
Merged #7331 into main.
--
https://github.com/apache/tvm/pull/7331#event-4259183187
@tqchen these tests are only run on CPU, so I should only have to update that
image, right?
--
https://github.com/apache/incubator-tvm/pull/6886#issuecomment-729304036
Looks good to me, was on vacation for a few days, thanks for the fix!
--
https://github.com/apache/incubator-tvm/pull/6886#issuecomment-729257691
@zhiics you will probably have to rebase due to auto scheduling CI corruption
--
https://github.com/apache/incubator-tvm/pull/6719#issuecomment-713883002
+1 (binding)
* Checked the code compiles
* Checked License and Notice
* Version
--
https://github.com/apache/incubator-tvm/issues/6622#issuecomment-703882812
@calvin886 ONNX was very different back in 2017, when we began work on Relay,
than it is today. Furthermore ONNX is designed as an interchange format and has
very little of the utilities that come with compiler IRs; these days Relay has
many more extensions than ONNX, including closures, data struc
I don't think having a single decorator makes sense given that the two scripts
will require disambiguation with separate decorators. My recent syntactic
rewriting work on Relay shares very little concrete syntax with TIR. Either way
we will probably need `tvm.script.tir` and `tvm.script.relay`
I think I addressed all the comments, and bumped the CI.
--
https://github.com/apache/incubator-tvm/pull/6437#issuecomment-691357110
This is blocked on #6448 and #6451, once we land those two it should be
possible to add the checking to the CI, format once more and land this.
--
@tqchen recommended that we first format the entire code base using these
settings and then try to land the CI parts; I am going to open a second PR with
the fully formatted repo.
--
@junrushao1994 @comaniac @areusch I just added the scripts and cleaned some
things up, take another pass if you can
--
https://github.com/apache/incubator-tvm/pull/6437#issuecomm
@areusch @tqchen @comaniac I can roll back the formatting; the first 3 or 4
commits were focused on formatting, then I went through the process to see if it
would actually work.
--
There is an underway effort by community members to do a binary release of
TVM with linked dependencies under the name `tlcpack`. The goal of these
packages is to include TVM linked with components that do not have open-source-
friendly licenses. My understanding is that its official release
I was able to recreate a version of the script using black's functionality; see
PR: https://github.com/apache/incubator-tvm/pull/6437.
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-introduce-automatic-formatting-of-python-code/7843/12) to respond.
cc @tqchen @areusch @comaniac @u99127 @jwfromm @mbrookhart @junrushao1994
--
https://github.com/apache/incubator-tvm/pull/6437#issuecomment-689997944
As per the recent RFC
https://discuss.tvm.apache.org/t/rfc-introduce-automatic-formatting-of-python-code/7843/10.
You can view, comment on, or merge this pull request online at:
https://github.com/apache/incubator-tvm/pull/6437
-- Commit Summary --
* Add CI boilerplate for black
* Work on
@tqchen Is your worry performance here? The reason we use Black on other code
at Octo, for example, is that it's push-button on the entire code base, like
gofmt and rustfmt.
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-introduce-automatic-formatting-of-python-code/7843/10) to respond.
Relatively recently in TVM's development we introduced the use of clang-format
to all of our C++ code. Overall I think this has been a huge win and makes it
much simpler to maintain consistent style across the code base and reduces the
amount of time spent fighting the linter in CI.
I propose
@comaniac if we land the final error reporting PR, it removes the existing error
reporting from the type checker completely. I think we should either choose to
ship it this release or delay until the next release. One worry is that there
will probably be a period of instability where we iterate/polish on n
I think it makes sense to be stricter. :+1:
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-savetofile-file-name-format-expected-behavior/7741/8) to respond.
Ack
On Thu, Aug 27, 2020 at 8:31 PM, Zhi Chen <csw...@gmail.com> wrote:
> ACK
>
> On Thu, Aug 27, 2020 at 5:53 PM Henry Saputra wrote:
>> Hear ya
>>
>> On Thu, Aug 27, 2020 at 10:37 AM Dave Fisher wrote:
+1 (binding)
--
https://github.com/apache/incubator-tvm/issues/6332#issuecomment-679409069
+1 exciting times, agree with what everyone else has already said!
--
https://github.com/apache/incubator-tvm/issues/6299#issuecomment-676731772
+1
--
https://github.com/apache/incubator-tvm/issues/5947#issuecomment-651300572
+1
--
https://github.com/apache/incubator-tvm/issues/5939#issuecomment-650484640
I think fp32 makes sense; the problem is that NumPy defaults are
architecture-specific, and on 32-bit platforms the default sizing for both
integers ("int") and floating point ("float") is different, a problem that has
plagued us repeatedly. The behavior of `const`, iirc, is trying to normalize
away th
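The platform dependence described above can be seen directly with NumPy; a
minimal sketch (the integer dtype varies by host, which is exactly the hazard
for picking IR defaults):

```python
import numpy as np

# NumPy's default dtypes follow the platform's native C types: on
# 64-bit Linux/macOS a Python int becomes int64, while on 32-bit
# platforms (and on Windows builds) it can be int32. Floats are
# float64 everywhere, which is why an fp32 default has to be an
# explicit choice rather than "whatever NumPy picks".
ints = np.array([1, 2, 3])
floats = np.array([1.0, 2.0])

print(ints.dtype)    # int64 or int32 depending on the host
print(floats.dtype)  # float64

# Being explicit sidesteps the platform dependence entirely.
explicit = np.array([1, 2, 3], dtype="float32")
print(explicit.dtype)  # float32
```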
Sorry for the lack of updates; I have been super oversubscribed lately (in the
process of finishing my thesis), plus lots of stuff at OctoML. @mbrookhart is a
hero and has a production-quality version he has been working on in C++. I think
he is getting really close to shipping a first version of the
+1
--
https://github.com/apache/incubator-tvm/issues/5102#issuecomment-601368458
Please welcome Arnaud Bergeron (@abergeron) as a TVM reviewer. He has been an
active user of Relay and has contributed quite a few bug fixes and helped
discuss and shape some of the Relay APIs as a user, as well as contributing
Conda support and numerous ops.
- [Commits History](https://github
> I am currently working on some end-to-end model stuff, and Relay's
> optimization passes are too slow. Any plans on improving the compilation speed?
Can you identify why they are slow? For example, switching some passes to the
new iterative graph visitor might improve speed quite a bit. In previo
After one cycle of deprecation it is now time to start the removal of legacy
NNVM code. NNVM's existence in the repo is preventing some refactoring needed
for @icemelon9's new dynamically sized kernel generation work.
I know we still have some external consumers of NNVM and we should be mindf
+1
--
https://github.com/apache/incubator-tvm/issues/4443#issuecomment-559900647
Merged #4345 into master.
--
https://github.com/apache/incubator-tvm/pull/4345#event-2804131179
I think if we look at my recent PR, we probably need to track the device context
when we allocate storage. The storage's context will prevent merging different
pieces of storage.
--
+1
--
https://github.com/dmlc/tvm/issues/4162#issuecomment-544754307
Besides mutually recursive globals, deferring type checking provides no
benefits; code that doesn't type check is not a valid Relay program and cannot
be used to do anything: analysis, optimization, or code generation.
We can defer type checking to the first pass, but I don't see it providing
I have begun to experiment with writing a new library called `astgen` to
replace the large quantity of boilerplate required by the AST today, and enable
us to more flexibly evolve the node system, and its APIs.
The first version of this tool will take a Python file like this:
```python
import as
Can you clarify the example above? The simplifications seem invalid to me; how
can you drop the addition by 3?
--
https://github.com/dmlc/tvm/issues/3478#issuecomment-507935931
I talked with Zach DeVito from the PyTorch team for a while about RefCounting;
there are quite a few benefits to using reference counting. We should probably
just use weak refs and solutions from Counting Immutable Beans (a recent paper
by my MSR collaborator where they do much better than GC languages
@ajtulloch @icemelon9 and I have been quietly hacking on a prototype the last
few months but have been busy with other things (such as VM 😄 ). We are going
to start pushing now, I opened a draft PR which will contain type checking
changes, and we will follow-up with code generation next.
One th
Currently a draft PR, see related RFC #3042.
This PR will only contain the type checking changes to Relay to support Any.
@icemelon9 and I will follow up with the related code generation PRs.
You can view, comment on, or merge this pull request online at:
https://github.com/dmlc/tvm/pull/3221
The problem with unification of the values is that closures fundamentally have
a different representation and can't use the runtime's, because they must store
a NodeRef (i.e. the code).
--
# Supporting Dynamic Dimensions
I recently opened an RFC proposing a new dynamic runtime (see #2810).
A missing piece of the puzzle for supporting fully dynamic models is typing and
code generation for tensors with statically unknown shapes.
There are three critical steps to supporting dynami
Do we need to commit the Verilog code? Isn't it generated?
--
https://github.com/dmlc/tvm/pull/3010#issuecomment-482743446
Haven't had the chance to read the full post yet, but wanted to just say this
looks great, and is the kind of thing we need more of! We've been chatting about
outlining a new structure for the docs and I think this kind of technical
documentation would be good to put into a dev's guide to workin
+1
--
https://github.com/dmlc/tvm/issues/2973#issuecomment-481488643
👍
--
https://github.com/dmlc/tvm/issues/2994#issuecomment-481482279
Change the name of RelayPrint to AsText to mirror the higher level API (see
#2955).
You can view, comment on, or merge this pull request online at:
https://github.com/dmlc/tvm/pull/2984
-- Commit Summary --
* Rename RelayPrint to AsText
-- File Changes --
M include/tvm/relay/expr.h (1
This is one of the errors from the code produced by compiling TVM's LLVM
output. It looks like you are most likely corrupting the DLTensor pointer in
some way.
`int32(arg0.shape[0]) == 33554432`: this part is complaining that the first
dimension of the first argument's shape is wrong.
What a
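For intuition, the generated guard quoted above behaves roughly like this
plain-Python sketch (the function and names are hypothetical; this is not the
actual TVM-emitted code):

```python
import numpy as np

def check_arg0_shape(arg0, expected_dim0=33554432):
    """Hypothetical Python analogue of the shape guard the generated
    code runs on its first argument before executing the kernel."""
    if int(arg0.shape[0]) != expected_dim0:
        raise ValueError(
            f"arg0.shape[0] == {arg0.shape[0]}, expected {expected_dim0}"
        )

# A tensor with the wrong leading dimension trips the same kind of error
# you would see from the compiled module.
try:
    check_arg0_shape(np.zeros((16,), dtype=np.int8))
except ValueError as e:
    print("shape mismatch:", e)
```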
@merrymercy I'm less interested in LOC and more in how much conceptual burden
there is. What are the key pieces that make up a backend description is more my
question.
I looked over the code, but I was at SysML and have two deadlines this week so I
haven't had a chance to really look it over. Look f
@merrymercy how much work is there per backend? Looking over the code now; will
follow up with more questions later.
--
https://github.com/dmlc/tvm/issues/2954#issuecomment-4793000
I think we should consider it. I think having the tuner sit in Python is okay,
the more important bit being the schedules and other compiler pieces in C++ for
integrating the compiler. I talked with some PyTorch people today and they
suggested a Python-free version of the compiler would be important
It should be disabled by default; it is set at optimization level 2, so I'm not
sure why it is executing.
Can you try:
```python
with relay.build_module.build_config(opt_level=2):
    graph_json, lib, params = relay.build_module.build(...)
```
---
[Visit Topic](https://discuss.tvm.ai/t/onnx-mode
This looks like someone introduced a bug or regression into the alter-layout
pass; could you open an issue against dmlc master so we can CC the appropriate
people to work on this? You can try turning off the alter-layout optimization
if you want to make progress in the meantime.
---
[Visit Topic](https:/
I posted a draft PR of the VM so everyone can check it out, provide feedback,
and play with the prototype I discussed. See #2889. I'll be polishing it over
the next few days.
--
The implementation of the hardware is not of interest to the high-level Relay
program; all tensor-to-tensor functions are black boxes. They can be implemented
any way you want: in C++, in TVM, or as a hardware accelerator primitive. If you
want to map a subset of the program down to this hardware yo