Thanks @tqchen, at the moment the Relax flow would be out of scope for our
current use-cases, although we'd want to make sure this RFC doesn't introduce
obstacles for porting to the Relax flow in the future. Do you foresee any
blockers with the current approach, or could we consider merging?
--
Thanks for the discussion so far @tqchen, I added a small example detailing how
we're registering schedules for the Relay flow. I believe this will have
minimal impact on how the schedule might be used in a Relax-based flow, but it
would be good to hear your thoughts.
--
Closing as superseded by: https://github.com/apache/tvm-rfcs/pull/104
--
Reply to this email directly or view it on GitHub:
https://github.com/apache/tvm-rfcs/pull/18#issuecomment-1954039440
You are receiving this because you are subscribed to this thread.
Message ID:
Closed #18.
--
https://github.com/apache/tvm-rfcs/pull/18#event-11860567122
Got it, thanks @tqchen :) It sounds as though we're already doing something
similar by adding a tag in the compute definition to identify the block during
scheduling.
--
https://github.com/apache/tvm-rfcs/pull/107#issuecomment-1945716982
Thanks for taking a look @tqchen! Since scheduling will be completed with
TensorIR, it will provide the building blocks for being plugged into an
IRModule=>IRModule transformation pass. For our current use-case, it's
important to be able to fall back to previous optimizations in the form of TE
schedules
An RFC for enabling Scalable Matrix Extension code generation in TVM.
You can view, comment on, or merge this pull request online at:
https://github.com/apache/tvm-rfcs/pull/107
-- Commit Summary --
* [RFC] Scalable Matrix Extension enablement
-- File Changes --
A rfcs/0106-scalable-mat
A change that has not yet been included in the prototype was the predicate
representation on buffer loads/stores in TVMScript programs. This was briefly
referenced in the RFC:
https://github.com/apache/tvm-rfcs/pull/104/files#diff-6724c2a24eb34f7094b4ff2e8562f7812e6e22c8197f51792f4b5cdfa811fec4R
Regarding the changes required to support scalability in the data type, I've
been prototyping adding a new `scalable_` attribute to `DataType` that wraps
`DLDataType`.
However, I've run into what I believe is an issue when accessing data types at
compile-time across the FFI boundary between Python and C++.
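For illustration, the shape of the wrapper being described could be modelled as
follows. The `DLDataType` field layout (`code`/`bits`/`lanes`) matches the real
struct from `dlpack.h`; the `DataType` class, its `scalable_` attribute, and the
"lanes-per-vscale" interpretation are assumptions sketching the prototype, not
TVM's actual implementation:

```python
import ctypes

# Mirror of DLDataType from dlpack.h: a type code, bit width, and lane count.
class DLDataType(ctypes.Structure):
    _fields_ = [
        ("code", ctypes.c_uint8),   # e.g. 0 = int, 1 = uint, 2 = float
        ("bits", ctypes.c_uint8),
        ("lanes", ctypes.c_uint16),
    ]

class DataType:
    """Hypothetical wrapper over DLDataType with an extra `scalable_` flag."""

    def __init__(self, code: int, bits: int, lanes: int, scalable: bool = False):
        self.dtype = DLDataType(code, bits, lanes)
        # Assumption: for a scalable type, `lanes` is read as lanes-per-vscale
        # rather than a fixed vector width.
        self.scalable_ = scalable

    def __repr__(self) -> str:
        suffix = "xvscale" if self.scalable_ else ""
        return (f"DataType(code={self.dtype.code}, bits={self.dtype.bits}, "
                f"lanes={self.dtype.lanes}{suffix})")

# A scalable 4-lane int32 vector, conceptually "int32x4xvscale".
dt = DataType(code=0, bits=32, lanes=4, scalable=True)
```

The interesting part is that `scalable_` lives outside the `DLDataType` struct,
which is exactly where an FFI boundary could lose it if only the raw
`DLDataType` bits are marshalled.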
Thanks @ashutosh-arm @NicolaLancellotti @leandron @neildhickey
--
https://github.com/apache/tvm/pull/15747#issuecomment-1752560774
Merged #15747 into main.
--
https://github.com/apache/tvm/pull/15747#event-10588194263
Closed #15346.
--
https://github.com/apache/tvm/pull/15346#event-1338323
Closing in favour of https://github.com/apache/tvm/pull/15469, thanks @tqchen!
Let's pull the conda python version upgrade into a separate PR
--
https://github.com/apache/tvm/pull/15346#issuecomment-1664181279
I've tried reproducing the conda environment used for MacOS with this patch
checked out, but I've been unable to recreate the same failure. The latest
version of Cython, `3.0.0`, installs successfully.
--
https://github.com/apache/tvm/pu
After updating the Python version to 3.8, the same issues mentioned in
https://github.com/apache/tvm/pull/15346#issuecomment-1640409277 seem to persist
--
https://github.com/apache/tvm/pull/15346#issuecomment-1641976374
Windows and MacOS builds are failing because they use cython==0.29.28, which is
not compatible with the new `noexcept` keyword. Support for it was added in
0.29.31
(https://github.com/cython/cython/blob/master/CHANGES.rst#02931-2022-07-27). We
could either upgrade the Windows and MacOS builds to
Cython `v3.0.0` was recently released
(https://github.com/cython/cython/releases/tag/3.0.0) and is used in newly
built docker images. This causes a compilation issue, since 3.0.0 expects
function definitions to be explicitly declared with the `noexcept` annotation.
This change should be backward compatible
Thanks @ashutosh-arm!
--
https://github.com/apache/tvm/pull/15092#issuecomment-1592597165
Merged #15092 into main.
--
https://github.com/apache/tvm/pull/15092#event-9537316075
Thanks @ashutosh-arm @NicolaLancellotti!
--
https://github.com/apache/tvm/pull/15059#issuecomment-1586928480
Merged #15059 into main.
--
https://github.com/apache/tvm/pull/15059#event-9499825004
+1
--
https://github.com/apache/tvm/issues/14129#issuecomment-1445998174
+1, thanks @AndrewZhaoLuo!
--
https://github.com/apache/tvm/issues/13026#issuecomment-1274276016
+1
--
https://github.com/apache/tvm/issues/12583#issuecomment-1231558359
+1
--
https://github.com/apache/tvm/issues/11415#issuecomment-1140867800
Thanks @MichaelJKlaiber, that makes sense. So I was wondering: if, in the
future, this interface is used by other backends (not accelerators), we would
need to think about renaming UMA to something more generic, e.g. UMB
(_Universal Modular Backend_) - I'm not the best with n
## Motivation
[Arm Compute Library](https://github.com/ARM-software/ComputeLibrary) (ACL) is
an open-source project that provides hand-crafted assembler routines for Arm
CPUs and GPUs. This integration looks at how we can accelerate CPU performance
for Arm devices in TVM using ACL. The
Thanks, I think this will be very useful. The benefit of this approach is that
it allows the runtime to be customized much more easily. I like the idea of
being able to cache an *engine* (in my case this will be a series of ACL
functions) - this opens up opportunity for optimization o
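The engine-caching idea above can be sketched roughly as follows. Everything
here is illustrative - `EngineCache`, `build_engine`, and the subgraph-id keys
are hypothetical names, not the actual ACL runtime API - but it shows why
caching pays off: the expensive configuration happens once per subgraph, and
every later invocation reuses the built engine:

```python
from typing import Any, Callable, Dict

class EngineCache:
    """Hypothetical cache: build an engine (e.g. a configured series of
    kernel functions) once per subgraph, then reuse it on every call."""

    def __init__(self, build_engine: Callable[[str], Any]):
        self._build_engine = build_engine
        self._engines: Dict[str, Any] = {}

    def get(self, subgraph_id: str) -> Any:
        # Expensive configuration happens only on the first lookup.
        if subgraph_id not in self._engines:
            self._engines[subgraph_id] = self._build_engine(subgraph_id)
        return self._engines[subgraph_id]

# Track how many times the (assumed) expensive build actually runs.
builds = []
cache = EngineCache(lambda sg: builds.append(sg) or f"engine<{sg}>")
cache.get("conv2d_0")
cache.get("conv2d_0")  # second call hits the cache; no rebuild
```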
Yes that's correct :slight_smile:
---
[Visit Topic](https://discuss.tvm.ai/t/relay-improved-graph-partitioning-algorithm/5830/20)
to respond.
Hi @aca88, I believe your example would be taken care of by using the Merge
Composite pass before partitioning. You can imagine that after running this
pass, add+conv2d for the blue compiler would be represented by a single node.
The partitioning would then happen as you described. However, this w
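As a toy illustration of the effect being described (not the actual Relay pass,
which works on dataflow graphs with pattern tables - the chain-of-names graph
and the `merge_composite` helper here are purely hypothetical): a matched
pattern like add+conv2d collapses to one composite node, so the partitioner
only ever sees a single unit for the blue compiler:

```python
def merge_composite(ops, pattern, composite_name):
    """Collapse every occurrence of `pattern` in a linear op chain into a
    single composite node named `composite_name`."""
    out, i = [], 0
    while i < len(ops):
        if ops[i:i + len(pattern)] == pattern:
            out.append(composite_name)  # the whole pattern becomes one node
            i += len(pattern)
        else:
            out.append(ops[i])
            i += 1
    return out

chain = ["add", "conv2d", "relu"]
merged = merge_composite(chain, ["add", "conv2d"], "blue.add_conv2d")
# merged == ["blue.add_conv2d", "relu"]
```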