+1
--
Reply to this email directly or view it on GitHub:
https://github.com/apache/tvm/issues/16368#issuecomment-1893968676
You are receiving this because you are subscribed to this thread.
@mbaret I think "incubator process" needs to make sure the incubated project
gets support from the community. That's what they mentioned about "LLVM
umbrella" or "Apache umbrella". LLVM and Apache are large enough organizations
that they have dedicated mentors and more importantly, they attract
Please join us to welcome @mbrookhart as a new committer.
Matthew is one of the major authors of the pattern language in Relay, which has
been one of the essential infrastructure tools and has made pattern matching
much easier. Matthew has been quite active contributing to non-recursive graph
visit
@tqchen @junrushao1994 how much of the work do you think we can reuse once we
move to https://discuss.tvm.apache.org/t/rfc-tensorir-a-schedulable-ir-for-tvm?
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-cse-optimization/8130/4) to
respond.
You are receiving this because you enabled mailing list mode.
Thanks for the RFC. Nice finding, and it looks good to me overall. Could you
also provide an example of what the user API looks like? Besides, what
information does the Tensor Expression Tree provide in addition to Tensor?
+1, I have checked
Incubating in name
Verified signature and checksum
LICENSE checked
NOTICE checked
DISCLAIMER checked
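For reference, the checksum part of such a verification can be sketched in shell. The file names here are placeholders, not the actual release artifacts:

```shell
# Placeholder artifact, for illustration only.
printf 'release payload' > apache-tvm-src.tar.gz

# The release manager publishes a .sha512 file alongside the artifact.
sha512sum apache-tvm-src.tar.gz > apache-tvm-src.tar.gz.sha512

# A voter re-computes the digest and compares it to the published one.
sha512sum -c apache-tvm-src.tar.gz.sha512
```

Signature verification is done separately, e.g. `gpg --verify apache-tvm-src.tar.gz.asc apache-tvm-src.tar.gz` after importing the project's KEYS file.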
--
https://github.com/apache/incubator-tvm/issues/6622#issu
Please join us to welcome Luis Vega (@vegaluisjose) as a new Committer. Luis is
one of the major authors of Chisel backend in VTA and has been contributing to
VTA actively. He has expertise in accelerator design and related compiler
techniques. Luis is also active in reviewing and providing sugge
@tqchen @zhiics @ZihengJiang
You can view, comment on, or merge this pull request online at:
https://github.com/apache/incubator-tvm/pull/6573
-- Commit Summary --
* [Doc] Update release document
-- File Changes --
M docs/contribute/release_process.rst (10)
-- Patch Links --
https:/
[quote="wrongtest, post:3, topic:7960"]
If I have some common neural network structure such as resnet50 at hand, can I
just use autodiff to get backward computation graph?
[/quote]
Graph-wise, I think you can refer to
[relay.transform.gradient](https://github.com/apache/incubator-tvm/blob/master
Please join us to welcome @hypercubestart as a new reviewer. He has been
contributing non-trivial optimizations to Relay. Andrew also actively reviews
PRs and shares his thoughts in the discuss forum.
- [Commits
History](https://github.com/apache/incubator-tvm/commits?author=hype
--
Yizhi Liu
Amazon Web Services
Vancouver, Canada
+1. Feels great to be part of the team.
--
https://github.com/apache/incubator-tvm/issues/6299#issuecomment-676754622
@tqchen
You can view, comment on, or merge this pull request online at:
https://github.com/apache/incubator-tvm/pull/6091
-- Commit Summary --
* [Docs] improve the doc of release
-- File Changes --
M docs/contribute/release_process.rst (15)
-- Patch Links --
https://github.com/apach
mailing list (https://lists.apache.org/list.html?dev@tvm.apache.org)
- TVM website (https://tvm.apache.org/)
- Github issues (https://github.com/apache/incubator-tvm/issues)
Best regards,
Apache TVM (incubating) Team
Closed #5972.
--
https://github.com/apache/incubator-tvm/issues/5972#event-3535794032
Apache TVM (incubating) 0.6.1 is a maintenance release incorporating important
bug fixes and important performance improvements. All users of Apache TVM
(incubating) 0.6.0 are advised to upgrade. Please review the following release
notes to learn about the bug fixes.
# Bug Fixes
* Fixed process terminat
Closed #5947.
--
https://github.com/apache/incubator-tvm/issues/5947#event-3499773289
Thanks everyone for voting. The voting result has been sent out.
--
https://github.com/apache/incubator-tvm/issues/5947#issuecomment-652063557
Dear TVM community,
I'm glad to announce the results of the vote.
This vote passes with 12 +1 votes (9 binding), no 0 votes, and no -1 votes.
+1 votes
* Tianqi Chen (binding)
* Masahiro Masuda (binding)
* Lianmin Zheng (binding)
* Jared Roesch (binding)
* Thierry Moreau (binding)
* Ziheng Jiang (
Merged #5948 into master.
--
https://github.com/apache/incubator-tvm/pull/5948#event-3490019728
cc @tqchen
You can view, comment on, or merge this pull request online at:
https://github.com/apache/incubator-tvm/pull/5948
-- Commit Summary --
* [Doc] minor fix for release doc
-- File Changes --
M docs/contribute/release_process.rst (9)
-- Patch Links --
https://github.com/apach
Dear TVM community,
This is a call for a vote to release Apache TVM (incubating) version 0.6.1.
This is a maintenance release incorporating important bug fixes. All users of
Apache TVM (incubating) 0.6.0 are advised to upgrade.
Link to release notes:
https://github.com/apache/incubator-tvm/re
Closed #5939.
--
https://github.com/apache/incubator-tvm/issues/5939#event-3489387028
Thanks everyone. I'm going to create another release candidate and close this
vote.
--
https://github.com/apache/incubator-tvm/issues/5939#issuecomment-650622511
# Bug Fixes
* Fixed process termination routine in windows #4844
* [Runtime] Fix NDArray SaveDLTensor declaration and implementation signature
different #4586
* [NODE][Serialization]fix serialization precision loss in float #4503
* [Relay][Frontend][TF] fix _parse_param bug #4711
* Fix bias_add g
Here's a list of fixes we applied to v0.6 branch. I will cut a tag this Friday.
* Fixed process termination routine in windows #4844
* [Runtime] Fix NDArray SaveDLTensor declaration and implementation signature
different #4586
* [NODE][Serialization]fix serialization precision loss in float #45
Thanks for pointing that out. I'll remove it accordingly.
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-minor-bugfix-release-for-v0-6/6716/8) to respond.
Here is a list of bug fixes we're going to apply on v0.6 branch, please let me
know if I missed anything.
* [RELAY] bugfix. #2215
* [Graph Tuner] Fix benchmark layout in graph tuner #3926
* [VTA] Parameterization and bug fix in TensorLoad module #3841
* [VTA] Fix TSIM compile error in Linux (ad
This is a proposal to do a minor (bugfix) release of v0.6, aka v0.6.1. Commits
will be cherry-picked to the v0.6.1 branch. We follow the standard [Apache
release process](https://tvm.apache.org/docs/contribute/release_process.html).
I will go through the commit history to get a list of bug fixing
This is a good suggestion. If you find any bug fixes missing from our monthly
dev report, feel free to point them out; this would help us reflect the work in
our release notes later.
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-improve-pull-requests-with-respect-to-bug-fixes/6529/3) to respond.
@mbrookhart not at all, please feel free to proceed :)
--
https://github.com/apache/incubator-tvm/issues/3670#issuecomment-619263443
@tqchen
You can view, comment on, or merge this pull request online at:
https://github.com/apache/incubator-tvm/pull/5151
-- Commit Summary --
* [Doc] TVM release process
-- File Changes --
M docs/contribute/index.rst (1)
A docs/contribute/release_process.rst (186)
-- Patch Links
Please welcome Josh Fromm (@jwfromm) as a TVM reviewer. He has been working on
operator and frontend support and actively reviewing other people's work. Josh
has done a good job helping the project grow.
- [Commits
History](https://github.com/apache/incubator-tvm/commits?author=jwfromm)
- [Code
Rev
@MarisaKirisame could you elaborate on "The jacobian of Y, W, should be dW
times jacobian Y W"? I'm not sure I correctly understand the symbols you use.
I think the main challenge is to infer the bounds for the Jacobian's axes,
under the scenario where output axes can be arbitrary linear combinations of
its i
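For reference, the standard chain rule the discussion circles around (my reading of the thread, with a scalar loss $L$ and $Y = f(W)$, not a statement of the RFC's own notation):

```latex
% Chain rule: the gradient of a scalar L with respect to W
% composes the upstream gradient with the Jacobian of Y w.r.t. W.
\frac{\partial L}{\partial W_{ij}}
  = \sum_{k} \frac{\partial L}{\partial Y_{k}}\,
             \frac{\partial Y_{k}}{\partial W_{ij}}
```

The bound-inference difficulty mentioned above is about the ranges of the index $k$ and the Jacobian axes $(i, j)$ when the output axes are linear combinations of input axes.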
@tqchen I have made limited progress at the moment; I might be able to revisit
in a month or so.
--
https://github.com/apache/incubator-tvm/issues/3670#issuecomment-584813744
> [2] https://aws.amazon.com/machine-learning/inferentia/
-
To unsubscribe, e-mail: dev-unsubscr...@tvm.apache.org
For additional commands, e-mail: dev-h...@tvm.apache.org
@tqchen
You can view, comment on, or merge this pull request online at:
https://github.com/apache/incubator-tvm/pull/4558
-- Commit Summary --
* [NEWS] add v0.6 release
-- File Changes --
M NEWS.md (893)
-- Patch Links --
https://github.com/apache/incubator-tvm/pull/4558.patch
http
Thanks @tqchen @icemelon9. I'll change to use an intrinsic and make it more general.
--
https://github.com/apache/incubator-tvm/issues/4487#issuecomment-564298170
## Motivation
As we have replaced `truncdiv`/`truncmod` with `floordiv`/`floormod` in most
places, there is strong demand for the simplifier to know the sign of an
expression. For example, knowing the bound of a tensor shape can help reduce
the if/else conditions significantly.
Here's an example of g
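For anyone unfamiliar with the distinction, plain Python can illustrate the semantics (this is just arithmetic, not TVM code): floor division rounds toward negative infinity, while C-style truncating division rounds toward zero, so the two disagree on negative operands:

```python
def truncdiv(a, b):
    # C-style division: round the quotient toward zero.
    return int(a / b)

def truncmod(a, b):
    # Remainder consistent with truncdiv: a == truncdiv(a, b) * b + truncmod(a, b)
    return a - truncdiv(a, b) * b

a, b = -7, 4
print(a // b, a % b)                   # floordiv/floormod: -2 1
print(truncdiv(a, b), truncmod(a, b))  # truncdiv/truncmod: -1 -3
```

With `floormod`, the remainder is always non-negative for a positive divisor, which is exactly the kind of sign information the simplifier wants to exploit.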
On Thu, Dec 5, 2019 at 1:42 PM Henry Saputra wrote:
>
> Hi Yizhi,
>
> Could you please close the VOTE thread in general@ list by sending [RESULT]
> thread to summarize the tally of the release Vote
>
> Thanks!
>
> - Henry
>
> On Thu, Dec 5, 2019 at 8:29 AM YiZhi Liu w
infrastructure, communications, and decision making process have
stabilized in a manner consistent with other successful ASF
projects.
# New Features
### Relay in Production
Relay is a functional, differentiable programming language designed to be an
expressive intermediate representation for machine learning systems. Relay
supports algebraic data types, closures, control flow, and recursion, allowing
it to directly represent
Dear TVM community,
I'm glad to announce the results of the vote.
This vote passes with 14 +1 votes (4 binding), no 0 votes, and no -1 votes.
+1 votes
* Tianqi Chen (binding)
* Josh Fromm
* Yong Wu
* Haichen Shen (binding)
* Zhi Chen
* Logan Weber
* Junru Shao
* Jared Roesch
* MarisaKirisame
* Ne
Dear TVM community,
This is a call for a vote to release Apache TVM (incubating) version 0.6.0. We
are thrilled to have had a lot of exciting features added since v0.5 early this
year.
1) Link to release notes:
https://github.com/apache/incubator-tvm/releases/tag/v0.6.0.rc2
2) Link to release ca
I have made another RC per IPMC's comments, which includes the bug fix.
https://github.com/apache/incubator-tvm/releases/tag/v0.6.0.rc2
@tqchen sounds good.
--
https://github.com/apache/incubator-tvm/issues/4406#issuecomment-558815123
@FrozenGene Thanks. Fixed just now. We also fixed some license problems per
IPMC's comments. RC1:
https://github.com/apache/incubator-tvm/releases/tag/0.6.0.rc1
@hsaputra , I remember the release artifact needs to be uploaded to apache dist
svn https://dist.apache.org/repos/dist/dev/incubator/ , is it still required? I
don't see tvm under the repository, shall we create one?
.h file was collected before its license was changed to ALv2. Shall we say it
is ALv2, or MIT?
And would you mind advising what was wrong with the following files?
6. /nnvm/include/nnvm/op.h - it looks like zlib is not used there?
7. ./src/schedule/bound.cc
On Sat, Nov 23, 2019 at 2:54 PM YiZhi Liu wrote
Dear Community,
This is an RFC for verifying whether v0.6.0rc0 is good to go. The release
candidate (release note, artifact, and source code) can be found
[here](https://github.com/apache/incubator-tvm/releases/tag/0.6.0.rc0).
This is **not** an official vote. Please help us verify:
* Incubati
@tqchen RFC after tag?
--
https://github.com/apache/incubator-tvm/issues/4259#issuecomment-557630301
The next steps are roughly:
1. Tag a release candidate
2. Create binary artifacts
3. Vote on the dev@ mail-list (if it fails, make changes and go to step 1)
4. Vote on the general@ mail-list (if it fails, make changes and go to step 1)
5. Finalize the release
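Step 1 might look like the following in shell. Everything here is illustrative only: a throwaway local repository and an example tag name, not the real apache/incubator-tvm repo.

```shell
# Throwaway repository standing in for the real one.
git init -q demo-release

# A commit to tag (an empty commit keeps the sketch self-contained).
git -C demo-release -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "prepare v0.6.0 release branch"

# Tag the release candidate; annotated tags are conventional for releases.
git -C demo-release tag -a v0.6.0.rc0 -m "Apache TVM v0.6.0 RC0"

git -C demo-release tag --list
```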
Release note updated.
Also removed the unexpected characters. Thanks @junrushao1994
--
https://github.com/apache/incubator-tvm/issues/4259#issuecomment-557394390
I have created v0.6.0 branch. I'm about to update the release notes.
--
https://github.com/apache/incubator-tvm/issues/4259#issuecomment-557329901
@tqchen should we wait till #4378 is fixed?
--
https://github.com/apache/incubator-tvm/issues/4259#issuecomment-555853460
Folks, is there anything that needs to be merged? I plan to cut the release
branch at the end of the day.
--
https://github.com/apache/incubator-tvm/issues/4259#issuecomment-555736631
@vinx13 thanks. updated.
--
https://github.com/apache/incubator-tvm/issues/4259#issuecomment-554618589
I updated the release notes to include recent changes, as well as some known
issues as @u99127 suggested. please let me know if I missed anything.
I don't have access to some of the devices, I can update the others.
@merrymercy @tqchen what's your suggestion?
--
https://github.com/apache/incubator-tvm/issues/4259#issuecommen
@merrymercy would you mind summarizing the drawbacks of the original
implementation, so we can learn from them?
--
https://github.com/apache/incubator-tvm/issues/2954#issuecomm
@kevinthesun Thanks. I'll go through the merge history to make sure everything
since the last monthly report gets into the release notes.
A kind reminder that we would like to cut the release branch as well as the RC
tag (RC0) this week. If you want to get any feature into the v0.6 release,
please let us know. Once the release branch is cut, we _no longer_ take new
features. Bug fixes can still get in if _necessary_. @apache/tvm-com
@u99127 Good idea, will add.
--
https://github.com/apache/incubator-tvm/issues/4259#issuecomment-552540902
# Build and Test
* Increase the robustness of CI tests
([#2841](https://github.com/dmlc/tvm/pull/2841),
[#2798](https://github.com/dmlc/tvm/pull/2798),
[#2793](https://github.com/dmlc/tvm/pull/2793),
[#2788](https://github.com/dmlc/tvm/pull/2788),
[#2781](https://github.com/dmlc/tvm/pull/2781),
@tqchen Shall we print a warning message when people use NNVM?
--
https://github.com/apache/incubator-tvm/issues/4259#issuecomment-552027579
I updated the issue, please let us know if anything's missing.
(I didn't include the "Build and Test" and "Bug fixes" sections, as the content
is long enough that GitHub prevents me from posting it.)
One thing we need to discuss is whether NNVM should be deprecated in the v0.6
release
We will mainly refer to the [Apache MXNet release
notes](https://github.com/apache/incubator-mxnet/releases). I'll create a draft
today and tomorrow; everyone is welcome to comment if anything that should be
included is missing.
Can the user specify the similarity distance function?
Would it be better to call it "share" rather than "ref"? "ref" reminds me of
"reference" in programming languages.
# TVM Monthly - October 2019
https://discuss.tvm.ai/t/tvm-monthly-oct-2019/4587
--
https://github.com/dmlc/tvm/issues/2623#issuecomment-549046494
I guess we will also need to change namespace `ml.dmlc.tvm` to `org.apache.tvm`
in tvm4j
--
https://github.com/dmlc/tvm/issues/4212#issuecomment-547083016
+1 I think most orgs will be happy to show their logos on the webpage.
--
https://github.com/dmlc/tvm/issues/4162#issuecomment-544300639
Merged #4036 into master.
--
https://github.com/dmlc/tvm/pull/4036#event-2674473102
Merged #3909 into master.
--
https://github.com/dmlc/tvm/pull/3909#event-2615867792
I believe @kevinthesun and @haichen are working on that.
---
[Visit Topic](https://discuss.tvm.ai/t/whether-tvm-will-support-dynamic-shapes-in-the-future/3700/4) to respond.
Maybe I'm missing some context. Would you mind giving an example?
---
[Visit Topic](https://discuss.tvm.ai/t/discussion-adding-a-function-to-relay-module-automatically-triggers-infertype/3643/2) to respond.
I agree compile techniques can be used to optimize "add", and in the long term
MXNet can adopt such optimizations.
But let's focus on how to support the current use case. It totally makes sense
that, because of the previous reason, we'd like to use option 1, while I'm
wondering whether it has any pro
This is the follow-up issue for
https://discuss.tvm.ai/t/rfc-functionality-of-alteroplayout-and-possible-refactoring/
To enhance the AlterOpLayout pass, I would like to propose 4 more passes to
replace the current AlterOpLayout pass:
- [ ] Layout inference pass
To infer the layout of each layer.
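As a toy illustration of what a layout inference pass does (a hand-rolled sketch over a linear chain of ops, not the proposed TVM implementation): each op carries a rule mapping its input layout to its output layout, and the pass propagates layouts forward through the graph:

```python
def infer_layouts(ops, input_layout):
    # ops: list of (name, rule), where rule maps input layout -> output layout.
    layout = input_layout
    inferred = {}
    for name, rule in ops:
        layout = rule(layout)
        inferred[name] = layout
    return inferred

keep = lambda layout: layout      # elementwise ops preserve layout
to_nchw = lambda layout: "NCHW"   # this conv demands NCHW output

print(infer_layouts([("conv", to_nchw), ("relu", keep)], "NHWC"))
# {'conv': 'NCHW', 'relu': 'NCHW'}
```

A real pass additionally handles branching graphs and inserts layout-transform ops where neighboring layouts disagree.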
+0.5 to floordiv given the familiarity and the usage in isl and MLIR.
--
https://github.com/dmlc/tvm/issues/3478#issuecomment-508878526
include/tvm/include is to replace HalideIR/src/tvm/node, and
include/tvm/ir is to replace HalideIR/src/ir.
Is that right?
--
https://github.com/dmlc/tvm/issues/3474#issuecomment-508286
# TVM Monthly - June 2019
https://discuss.tvm.ai/t/tvm-monthly-june-2019
--
https://github.com/dmlc/tvm/issues/2623#issuecomment-507917006
Closed #3390.
--
https://github.com/dmlc/tvm/issues/3390#event-2455283439
Closed by https://github.com/dmlc/tvm/pull/3389
--
https://github.com/dmlc/tvm/issues/3390#issuecomment-507793738
Regarding how it would affect the topi implementation of broadcast and reduce
ops, I think we can migrate the current compute implementation to this new
approach, while we might still need to know the shape beforehand in the
schedule.
I'll change the name to "auto_broadcast" and document the behavior more
preci
Move the discussion here
> The things to be improved are
* Document the behavior independent of arg_binder.
- Maps buffer[i][j][k] -> buffer[i][0][k] if dimension i's shape equals 0
* Debate on the name (auto broadcast?), enum vs string as type key
* Discuss how would the behavior affect topi im
We are trying to use TVM to generate operator definitions for MXNet.
The gap is that, although TVM compute/schedule can deal with symbolic shapes,
some compute definitions strongly rely on a fixed shape. For example,
broadcast ops:
```python
A = tvm.placeholder(shape=(tvm.var("a1"), tvm.v
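The shape dependence comes from the broadcast rule itself: deciding whether a dimension broadcasts requires knowing whether its extent equals 1, which a symbolic `tvm.var` does not reveal at compile time. A minimal pure-Python version of the output-shape rule (illustrative, not TVM API):

```python
from itertools import zip_longest

def broadcast_shape(s1, s2):
    # Align shapes on trailing dimensions; an extent of 1 stretches to match.
    out = []
    for a, b in zip_longest(reversed(s1), reversed(s2), fillvalue=1):
        if a != b and 1 not in (a, b):
            raise ValueError(f"incompatible extents {a} vs {b}")
        out.append(max(a, b))
    return tuple(reversed(out))

print(broadcast_shape((2, 1, 4), (3, 1)))  # (2, 3, 4)
```

With symbolic extents, the `a != b and 1 not in (a, b)` test cannot be answered statically, which is why such compute definitions end up requiring fixed shapes.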
We are glad to welcome @were as a new committer of TVM.
Jian is the major author of Hybrid Script for TVM. The tool enables people to
write complicated compute logic in pure Python and have it transformed to TVM
tensor IR directly. It makes life much easier for
implementing operators like non-maxim
We are glad to welcome @eqy as a new PMC member of TVM.
As a committer, Eddie has contributed heavily to Relay, AutoTVM, Quantization,
TOPI, etc.
Eddie is also very active in reviewing as well as answering questions on the
discuss forum. Keeping the community active is key to the project's success.
Merged #3130 into master.
--
https://github.com/dmlc/tvm/pull/3130#event-2313511822
I prefer not to do any layout conversion in frontend.
Just to clarify, tflite quantization has not been upstreamed, right?
@FrozenGene
--
https://github.com/dmlc/tvm/issues/2519
polyhedral optimization (or at least the ability to easily apply
polyhedral-like analysis) might be attractive for ASICs though, it could help
to build a smarter tensorizer.
---
[Visit Topic](https://discuss.tvm.ai/t/google-lasted-work-mlir-primer/1721/22) to respond.
@merrymercy Do you think this analysis design can be easily extended to work on
the TVM Tensor AST (HalideIR) instead of ScheduleStage? Not urgent, but I think
eventually we will make schedule primitives work on HalideIR, so that we can
unify the underlying data structure of schedule and
@merrymercy Could you elaborate a bit on the 4 types (simple reduction, complex
reduction, direct compute, and location-tunable compute)? Also, it would be
helpful if you could give an example of what the DAG looks like.
`LOG(INFO) << oshape;` ?
---
[Visit Topic](https://discuss.tvm.ai/t/how-to-debug-and-print-out-content-of-indexexpr/2039/2) to respond.