Re: [apache/tvm] [Release] v0.11.0 release schedule (Issue #13586)

2023-02-02 Thread Manupa Karunaratne
Looks like the GH branch protection is kicking in, kind of like: https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/about-protected-branches#require-pull-request-reviews-before-merging Alternatively, you could do

Re: [apache/tvm-rfcs] [RFC] Add Commit Message Guideline (PR #88)

2022-08-25 Thread Manupa Karunaratne
Ah ok, just noticing that it was missing in the RFC header -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm-rfcs/pull/88#issuecomment-1227649858

Re: [apache/tvm-rfcs] [RFC] Add Commit Message Guideline (PR #88)

2022-08-25 Thread Manupa Karunaratne
Just a final remark: don't we need a tracking issue to track the actual landing of the PR that adds pull_request.rst to the docs? -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm-rfcs/pull/88#issuecomment-1227600044

Re: [apache/tvm] [VOTE] Commit Messages RFC (Issue #12583)

2022-08-25 Thread Manupa Karunaratne
+1 -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm/issues/12583#issuecomment-1227595594

Re: [apache/tvm-rfcs] [USMP] Update RFC with constants pools (PR #81)

2022-06-23 Thread Manupa Karunaratne
cc: @areusch -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm-rfcs/pull/81#issuecomment-1164214714

[apache/tvm-rfcs] [USMP] Update RFC with constants pools (PR #81)

2022-06-23 Thread Manupa Karunaratne
This commit introduces the notion of constant memory pools and removes the need to define access on each of the targets. You can view, comment on, or merge this pull request online at: https://github.com/apache/tvm-rfcs/pull/81 -- Commit Summary -- * [USMP] Update RFC with constants pools -

Re: [apache/tvm-rfcs] [RFC] Adding initial SVE implementation (#18)

2022-06-21 Thread Manupa Karunaratne
Hi @tqchen @kparzysz-quic @masahi @tkonolige @smeijer1234, We are looking to revive this work. I have gone through the thread. The summary so far is as follows: * We want to introduce/enhance a scheduling vectorization primitive that could be controlled by user/auto-tuner/auto-sche

Re: [apache/tvm-rfcs] [RFC] UMA Universal Modular Accelerator Interface (PR #60)

2022-05-17 Thread Manupa Karunaratne
> So a user would only write tvm.target.Target("ultra_trail -uma_attrs=<custom attr string>") and in code you would access the target via target.attrs["uma_attrs"]["attr1"], target.attrs["uma_attrs"]["attr2"], etc.? More or less yes -- maybe we could (re)use "mattr" instead of "uma_attrs" loo
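A minimal sketch of the "mattr"-style access being discussed, using the already registered "llvm" target kind; a custom kind such as "ultra_trail" would first need to be registered with its own attribute schema, so this illustrates the mechanism rather than the UMA API itself:

```python
import tvm

# "mattr" on an existing target kind takes a comma-separated list of strings.
tgt = tvm.target.Target("llvm -mattr=+neon,+fp16")

# Well-known attributes are exposed as properties and via the generic attrs map.
print(tgt.mattr)           # e.g. ["+neon", "+fp16"]
print(tgt.attrs["mattr"])  # same values, accessed generically
```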

Re: [apache/tvm-rfcs] [RFC] UMA Universal Modular Accelerator Interface (PR #60)

2022-05-17 Thread Manupa Karunaratne
``` They can be used during target creation similar to other sub_target strings: ut_target = tvm.target.Target("ultra_trail -ultra_trail_attr_1=attr1 -ultra_trail_attr_2=attr2") ``` ``` This could probably be solved by adding type and/or default arguments to the argument parser, e.g.: self._re
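The truncated quote above suggests adding type and/or default arguments to the attribute parser. A generic sketch of that idea in plain argparse (not the actual UMA registration method, whose name is cut off above):

```python
import argparse

# Hypothetical attribute parser for an "ultra_trail"-like target: each
# attribute gets a type and a default, so omitted attributes still resolve.
parser = argparse.ArgumentParser(prog="ultra_trail-attrs")
parser.add_argument("--ultra_trail_attr_1", type=int, default=8)
parser.add_argument("--ultra_trail_attr_2", type=str, default="auto")

args = parser.parse_args(["--ultra_trail_attr_1", "16"])
print(args.ultra_trail_attr_1, args.ultra_trail_attr_2)  # 16 auto
```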

Re: [apache/tvm-rfcs] [RFC] UMA Universal Modular Accelerator Interface (PR #60)

2022-05-11 Thread Manupa Karunaratne
For A1, my personal preference is that we go for an Enum-based approach rather than ints. Unless, of course, we have a good reason not to do that -- which I think should be outlined in the RFC for future reference.

Re: [apache/tvm-rfcs] [RFC] UMA Universal Modular Accelerator Interface (PR #60)

2022-05-11 Thread Manupa Karunaratne
To help progress with A2, I can think of two solutions: A2.1: We could register a sub_target string (similar to an mtriple in LLVM) and a decoder (e.g. _register_sub_target_decoder(ultra_trail_target_decoder(str)) --> AttrDict/Dict). We could register the attr_dict as uma_attr

Re: [apache/tvm-rfcs] [RFC] UMA Universal Modular Accelerator Interface (PR #60)

2022-05-10 Thread Manupa Karunaratne
@areusch and @MichaelJKlaiber, I agree with using [Target-registered compilation flow customization](https://github.com/apache/tvm-rfcs/blob/main/rfcs/0010-target-registered-compiler-flow-customisation.md). I am struggling with how to connect that with: ``` TVM_REGISTER_GLOBAL("relay.backend.con

Re: [apache/tvm-rfcs] [RFC] UMA Universal Modular Accelerator Interface (PR #60)

2022-04-19 Thread Manupa Karunaratne
@MichaelJKlaiber, I did another pass. It is looking good! From my side, ideally, there are only two things that we need to agree upon in the design (which relates to the two outstanding conversations above). A1. The rationale for using the int-based phase in registering passes. A2. The methodology to at

Re: [apache/tvm-rfcs] Collage RFC (PR #62)

2022-03-28 Thread Manupa Karunaratne
Hi @mbs-octoml, I think you may have missed the comments that are hidden by GitHub (happens to me all the time :) ). -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm-rfcs/pull/62#issuecomment-1080393193

Re: [apache/tvm-rfcs] Collage RFC (PR #62)

2022-03-25 Thread Manupa Karunaratne
> No. As far as Collage is concerned it just calls the abstract CostEstimator::Estimate interface for each candidate partition, and can remain ignorant as to where those costs come from. In the prototype it is hard coded to tune, build and run locally to help us get going. Here at OctoM

Re: [apache/tvm-rfcs] Collage RFC (PR #62)

2022-03-24 Thread Manupa Karunaratne
> 1.) How can a user export the search done by Collage? i.e. similar to loading tuning logs where ApplyBestHistory is done. Having thought about this more, if there can be a way to define a PartitionSpec that is tied to a DFS IndexSet that could be exported and imported, that might work out.

[Apache TVM Discuss] [Development/pre-RFC] Commit Message Guideline

2022-03-18 Thread Manupa Karunaratne via Apache TVM Discuss
cc: @masahi @Lunderberg --- [Visit Topic](https://discuss.tvm.apache.org/t/commit-message-guideline/12334/12) to respond.

[Apache TVM Discuss] [Development/pre-RFC] Commit Message Guideline

2022-03-18 Thread Manupa Karunaratne via Apache TVM Discuss
Thanks @gromero for taking this initiative. I would actually push us to take a pragmatic route to enforce these (kind of agreeing with @driazati) given the distributed nature of the TVM/OSS project; failing that, we fall back to this being at least a "guideline" -- which we don't have at the minute :).

Re: [apache/tvm-rfcs] [RFC] UMA Universal Modular Accelerator Interface (PR #60)

2022-03-15 Thread Manupa Karunaratne
@cgerum thanks for the detailed analysis! I'm wondering whether we should provide an optional partitioning hook as well -- so it can be anything, and let the default be a Sequential of MergeComposite, AnnotateTarget, MergeCompilerRegions, PartitionGraph. WDYT?
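A minimal sketch of the default partitioning sequence mentioned above, built from the standard Relay passes; `pattern_table` and the compiler name "my_accel" are placeholders for a real BYOC target's patterns and name:

```python
import tvm
from tvm import relay

def default_partition(mod, pattern_table, compiler_name="my_accel"):
    # Default BYOC-style pipeline: fuse pattern matches into composite
    # functions, annotate/merge regions for the compiler, then partition.
    seq = tvm.transform.Sequential(
        [
            relay.transform.MergeComposite(pattern_table),
            relay.transform.AnnotateTarget(compiler_name),
            relay.transform.MergeCompilerRegions(),
            relay.transform.PartitionGraph(),
        ]
    )
    with tvm.transform.PassContext(opt_level=3):
        return seq(mod)
```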

[Apache TVM Discuss] [Development/pre-RFC] [RFC] Rebuild Docker images per commit

2022-03-10 Thread Manupa Karunaratne via Apache TVM Discuss
@driazati @leandron, I think this proposal will benefit all the work that requires updates to dependencies. @masahi @Leo-arm @elenkalda-arm I would suggest we scope the scripts that are relevant to this proposal (as it seems there are already other places attackers could exploit anyway).

Re: [apache/tvm] [VOTE] Replace codeowners with more relevant automation (Issue #10471)

2022-03-07 Thread Manupa Karunaratne
+1 -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm/issues/10471#issuecomment-1060839292

[Apache TVM Discuss] [Development] Problem with FuseOps (and embedded constants in TIR)

2022-02-25 Thread Manupa Karunaratne via Apache TVM Discuss
@kparzysz, as mentioned in the PR, the above reference is about scalar constants, which are not subject to link-params (correct me if I am wrong -- @dmitriy-arm). #8509 is about non-scalar constants. One option is for the Hexagon backend to be adjusted to handle AllocateConst nodes, instea

[Apache TVM Discuss] [Development] Problem with FuseOps (and embedded constants in TIR)

2022-02-25 Thread Manupa Karunaratne via Apache TVM Discuss
Hi @kparzysz, sorry to hear that there was a downstream failure because of #8509. [quote="kparzysz, post:1, topic:12165"] Float16 constants are not supported by constants in TIR and compilation aborts. [/quote] [quote="kparzysz, post:1, topic:12165"] Constants that are not parameters cannot hav

[Apache TVM Discuss] [Development/pre-RFC] [RFC] Remove CODEOWNERS

2022-02-24 Thread Manupa Karunaratne via Apache TVM Discuss
I see. All good then :D --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-remove-codeowners/12095/11) to respond.

[Apache TVM Discuss] [Development/pre-RFC] [RFC] Remove CODEOWNERS

2022-02-24 Thread Manupa Karunaratne via Apache TVM Discuss
@driazati @areusch, this looks like a great suggestion! I think the proposal is about adding a mechanism to use the cc tag to attach people as reviewers, which seems like a good step. I agree with @comaniac on helping new authors find people to tag for reviews rather than doing a mandator

[Apache TVM Discuss] [Development/pre-RFC] [RFC] UMA: Universal Modular Accelerator Interface

2022-02-24 Thread Manupa Karunaratne via Apache TVM Discuss
Hi @MJKlaiber, apologies for not getting back to this in time. Thanks for the proposal! It broadly looks like wrapping the Target Hooks RFC (by @Mousius): https://github.com/apache/tvm-rfcs/blob/main/rfcs/0010-target-registered-compiler-flow-customisation.md, and exposing a nice/str

[Apache TVM Discuss] [Development/pre-RFC] [RFC] Rebuild Docker images per commit

2022-02-14 Thread Manupa Karunaratne via Apache TVM Discuss
Hi @driazati, I would support this. This is a great improvement, as it would always verify the patches in the environment where they are meant to be verified -- without having to merge docker changes first and then run the docker-staging job with the other changes. @Mousius what do you thi

Re: [apache/tvm] [RFC][Tracking Issue] Arm® Ethos™-U Integration (#8482)

2021-11-24 Thread Manupa Karunaratne
Closed #8482. -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm/issues/8482#event-5668323234

[Apache TVM Discuss] [Development/pre-RFC] [pre-RFC] Compilation Configuration Representation

2021-11-04 Thread Manupa Karunaratne via Apache TVM Discuss
Thanks for the interesting discussion. @tqchen @junrushao1994, in terms of the definition of the target, I see two categories of arguments presented here: C1: The executor and runtime should belong to the target -- even if it means duplication. C2: The targets should be hierarchical and re

[Apache TVM Discuss] [Development/pre-RFC] [pre-RFC] Compilation Configuration Representation

2021-11-03 Thread Manupa Karunaratne via Apache TVM Discuss
Hi @tqchen and @zxybach, cc: @mbaret What is a Composite Target? TVM being a multi-target compiler, it would be a bit confusing to use an Array of Targets as another Composite Target -- I think it's the terminology that is confusing here. A composite target sounds like a target that codegen

[Apache TVM Discuss] [Development] [BYOC, CUTLASS] Dealing with Constants in C source-gen based BYOC

2021-11-01 Thread Manupa Karunaratne via Apache TVM Discuss
@masahi There is another option you could take here. The wildcard() actually works here because the constant remains in the @main function of the IRModule. In the partition_for_* function, where the full IRModule is visible (along with @main and the external functions), you could actually mutate the c
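A small sketch of the wildcard() point: while the constant still lives in @main, the weight argument of the matched op is just another input, so a plain wildcard() catches it. The pattern and shapes below are illustrative, not CUTLASS's actual patterns:

```python
import numpy as np
import tvm
from tvm import relay
from tvm.relay.dataflow_pattern import is_op, wildcard

# conv2d whose data AND weight are wildcards: the weight can be a
# relay.Constant sitting in @main or a free variable, and both match.
conv_pat = is_op("nn.conv2d")(wildcard(), wildcard())

data = relay.var("data", shape=(1, 3, 8, 8))
weight = relay.const(np.zeros((4, 3, 3, 3), dtype="float32"))
out = relay.nn.conv2d(data, weight)
assert conv_pat.match(out)
```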

Re: [apache/tvm-rfcs] [RFC] TVM Unified Static Memory Planning (#9)

2021-10-04 Thread Manupa Karunaratne
@electriclilies -- sorry I missed your suggestions -- corrected them now. @areusch -- addressed the comments now. -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm-rfcs/pull/9#issuecomment-9335467

Re: [apache/tvm-rfcs] [RFC][TIR] Adding annotation field to tir.allocate (#23)

2021-10-01 Thread Manupa Karunaratne
@tqchen shall we get this in then? -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm-rfcs/pull/23#issuecomment-932148628

Re: [apache/tvm-rfcs] [RFC] Improved multi-target handling (#38)

2021-10-01 Thread Manupa Karunaratne
Hi @mbs-octoml, I may have put a related comment here: https://github.com/apache/tvm/pull/8892#issuecomment-932020564 However, partitioning for devices of the same kind is a step forward from unifying the BYOC and Device annotations. Is this RFC intended to cover all of these?

Re: [apache/tvm-rfcs] [RFC][TIR] Adding annotation field to tir.allocate (#23)

2021-09-30 Thread Manupa Karunaratne
@tqchen thanks for spotting that! Made the change. -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm-rfcs/pull/23#issuecomment-931391386

Re: [apache/tvm-rfcs] [RFC][TIR] TIR Non-scalar Constants (#22)

2021-09-29 Thread Manupa Karunaratne
@d-smirnov -- I think the design is stable (just waiting on @areusch); shall we look to update the PRs: * https://github.com/apache/tvm/pull/8472 * https://github.com/apache/tvm/pull/8509

Re: [apache/tvm-rfcs] [RFC][TIR] Adding annotation field to tir.allocate (#23)

2021-09-29 Thread Manupa Karunaratne
@tqchen merge? -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm-rfcs/pull/23#issuecomment-930796849

Re: [apache/tvm-rfcs] [RFC][TIR] TIR Non-scalar Constants (#22)

2021-09-29 Thread Manupa Karunaratne
@areusch a friendly ping! -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm-rfcs/pull/22#issuecomment-930796609

Re: [apache/tvm-rfcs] [RFC][TIR] TIR Non-scalar Constants (#22)

2021-09-27 Thread Manupa Karunaratne
@areusch @junrushao1994 I have added a section to say how constants are added to the IRModule now. Summary: the storage of constants in the IRModule will be in a "Constants" attribute as an Array. Basically, if the tir.allocate_const node is created first, then the PrimFunc, and lastly if it get

Re: [apache/tvm-rfcs] [RFC] TVM Unified Static Memory Planning (#9)

2021-09-27 Thread Manupa Karunaratne
Hi @areusch, I have addressed the candidate_memory_pool query now. For your question around fallback: > where are the "fallback" candidate_memory_pools passed in to the runtime? The fallback only happens at compilation time as per this RFC. Therefore, by the time USMP is done, one pool w

Re: [apache/tvm] [VOTE] Adopt round-robin assignment of reviewers for GitHub pull request reviewer assignment. (#9057)

2021-09-27 Thread Manupa Karunaratne
-1. It feels like the wrong solution to a valid problem. I object mainly for two reasons: 1) The round-robin assignment could miss out on interested reviewers. Giving everyone (who has worked on and contributed to the specific component) the opportunity for review, IM

Re: [apache/tvm-rfcs] [RFC][TIR] Adding annotation field to tir.allocate (#23)

2021-09-27 Thread Manupa Karunaratne
@areusch @tqchen, could we agree to move on with using the annotations instead of AttrStmt? -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm-rfcs/pull/23#issuecomment-927950153

Re: [apache/tvm-rfcs] [RFC][TIR] TIR Non-scalar Constants (#22)

2021-09-22 Thread Manupa Karunaratne
Since relay.Constant has the same need to parse in constants, it would be appreciated not to block progress on deciding the mechanics of parsing NDArrays into the IRModule.

Re: [apache/tvm-rfcs] [RFC][TIR] TIR Non-scalar Constants (#22)

2021-09-20 Thread Manupa Karunaratne
Hi @junrushao1994, we have discussed this internally and we find that referring to NDArrays in IRModule attributes through tir.allocate_const nodes seems reasonable. I'll do a pass to modify the text in the RFC. @d-smirnov, any thoughts from you? -- I think we will have to store the NDArrays as IR

Re: [apache/tvm-rfcs] [RFC][TIR] Adding annotation field to tir.allocate (#23)

2021-09-20 Thread Manupa Karunaratne
Thanks @junrushao1994, it would be great if we could finalize the RFC here because it has a cascading effect on the USMP RFC and certain PRs waiting on it.

Re: [apache/tvm-rfcs] [RFC] TVM Unified Static Memory Planning (#9)

2021-09-19 Thread Manupa Karunaratne
@areusch I got some cycles to spend on this. I've updated the RFC addressing your comments and reflecting the changes discussed here as well: https://github.com/apache/tvm-rfcs/blob/c447cbfbd5abceaa7623a0f90cc492784e6f0c0b/rfcs/0023-adding-annotation-field-to-tir.allocate.md. PTAL when you get

Re: [apache/tvm-rfcs] [RFC][TIR] Adding annotation field to tir.allocate (#23)

2021-09-19 Thread Manupa Karunaratne
Done -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm-rfcs/pull/23#issuecomment-922667261

Re: [apache/tvm-rfcs] [RFC][TIR] TIR Non-scalar Constants (#22)

2021-09-16 Thread Manupa Karunaratne
@junrushao1994, for A) the reason is that constants represent something intimate to the compute and require space. Moreover, in the scheduling passes where we want to do slicing in loops, where the weights get sliced and need to undergo transformations (e.g. compression), it will need to keep on

Re: [apache/tvm-rfcs] [RFC][TIR] TIR Non-scalar Constants (#22)

2021-09-16 Thread Manupa Karunaratne
@junrushao1994 has followed up with individual comments and it's pending my response :) I'll do it next once I get some free cycles.

Re: [apache/tvm-rfcs] [RFC][TIR] TIR Pinned Memory Representation (#23)

2021-09-16 Thread Manupa Karunaratne
@tqchen, can you take a look? I can modify the USMP RFC to reflect the decisions made here, if we can finalize the discussion here.

Re: [apache/tvm-rfcs] [RFC][TIR] TIR Pinned Memory Representation (#23)

2021-09-08 Thread Manupa Karunaratne
Hi all, sorry for the delay! I've managed to update the text to reflect the discussion. PTAL, and if it's good, shall we get this in?

Re: [apache/tvm-rfcs] [RFC][TIR] TIR Pinned Memory Representation (#23)

2021-08-31 Thread Manupa Karunaratne
Cool; I'll adjust the RFC text tomorrow. -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm-rfcs/pull/23#issuecomment-909435339

Re: [apache/tvm-rfcs] [RFC][TIR] TIR Pinned Memory Representation (#23)

2021-08-31 Thread Manupa Karunaratne
Thanks @tqchen, yes we could work with this for now. I take it that for P2, we could use the tag of storage_scope as well, then?

Re: [apache/tvm-rfcs] [RFC][TIR] TIR Pinned Memory Representation (#23)

2021-08-31 Thread Manupa Karunaratne
Hi all, thanks for taking the time to look at it. Initially, my thoughts were to use the same field for two purposes: P1) indicate candidate memories (a.k.a. pools) that each allocate could be associated with; P2) after the memory planner is run, it will pick one out of the list by reducing the candi

Re: [apache/tvm] [RFC][Tracking Issue] TensorIR Scheduling (#7527)

2021-08-31 Thread Manupa Karunaratne
Thanks @junrushao1994. -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm/issues/7527#issuecomment-909129132

Re: [apache/tvm] [RFC][Tracking Issue] TensorIR Scheduling (#7527)

2021-08-27 Thread Manupa Karunaratne
Ack, many thanks for the info 🙂! -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm/issues/7527#issuecomment-907200410

Re: [apache/tvm] [RFC][Tracking Issue] TensorIR Scheduling (#7527)

2021-08-27 Thread Manupa Karunaratne
Hey @junrushao1994, thanks for the clarifications. Since the relay integration is supposed to be happening prior to meta schedule being concluded, what would be the default 'schedule' (or maybe, in the context of TensorIR, the default set of scheduling passes) used in a relay.build flow?

Re: [apache/tvm] [RFC][Tracking Issue] TensorIR Scheduling (#7527)

2021-08-27 Thread Manupa Karunaratne
Ack. Thanks. Out of curiosity, for the planned relay integration: * Will the integration be using #7987? * If you have decided, please let us know which other APIs will (at least initially) be used to create the high-level non-scheduled PrimFunc. * Will it include rewriting schedules in TOPI

Re: [apache/tvm] [RFC][Tracking Issue] TensorIR Scheduling (#7527)

2021-08-26 Thread Manupa Karunaratne
Hi @junrushao1994 @Hzfengsy, thanks for the effort that goes in here. Some of these scheduling primitives that are not in TE are really useful. Will there be a task to integrate these scheduling primitives to be used by the main compilation flow (i.e. relay.build)?

Re: [apache/tvm-rfcs] [RFC] Arm Ethos-U Integration (#11)

2021-08-23 Thread Manupa Karunaratne
Hi all, I think all the comments are addressed now. Waiting for approval or comments. -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm-rfcs/pull/11#issuecomment-903921134

Re: [apache/tvm-rfcs] [RFC][TIR] TIR Pinned Memory Representation (#23)

2021-08-17 Thread Manupa Karunaratne
cc: @tqchen @junrushao1994 @jroesch @areusch -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm-rfcs/pull/23#issuecomment-900479246

[apache/tvm-rfcs] [RFC][TIR] TIR Pinned Memory Representation (#23)

2021-08-17 Thread Manupa Karunaratne
* adding the markdown Change-Id: I6980d8f9a228ff5e8a79d74220db9b77f88a3e1b You can view, comment on, or merge this pull request online at: https://github.com/apache/tvm-rfcs/pull/23 -- Commit Summary -- * [RFC][TIR] TIR Pinned Memory Representation -- File Changes -- A rfcs/000x-assoc

Re: [apache/tvm-rfcs] [RFC][TIR] TIR Non-scalar Constants (#22)

2021-08-17 Thread Manupa Karunaratne
cc: @tqchen @junrushao1994 @jroesch @areusch -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm-rfcs/pull/22#issuecomment-900469754

[apache/tvm-rfcs] [RFC][TIR] TIR Non-scalar Constants (#22)

2021-08-17 Thread Manupa Karunaratne
* added the markdown * added a commit msg header Change-Id: I0fb3e6b97242ba219c157c9abe5184f14a9f8eff You can view, comment on, or merge this pull request online at: https://github.com/apache/tvm-rfcs/pull/22 -- Commit Summary -- * [RFC][TIR] TIR Non-scalar Constants -- File Changes --

Re: [apache/tvm-rfcs] [RFC] Arm Ethos-U Integration (#11)

2021-08-12 Thread Manupa Karunaratne
Sorry for the delay! I addressed your comments and answered the questions. @areusch -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm-rfcs/pull/11#issuecomment-897621033

Re: [apache/tvm-rfcs] Additional Target Hooks RFC (#10)

2021-08-04 Thread Manupa Karunaratne
Hi @jroesch @Mousius, I think having Passes (instead of functions) could also work. I guess what we are after is the ability to assemble and re-use TVM's compilation passes on a per-target basis. https://github.com/apache/tvm/pull/8110#discussion_r639559530 Continuing the above comment, this also

[apache/tvm] [RFC][Tracking Issue] Arm® Ethos™-U Integration (#8482)

2021-07-15 Thread Manupa Karunaratne
This issue tracks upstreaming progress for the Arm® Ethos™-U integration. - [ ] P1: The ci_cpu Dockerfile changes and install scripts – Arm® Corstone™-300 FVP and Ethos™-U core driver - [ ] P2: The Relay passes with unit tests for Conv2D (Partitioning, Preprocessing and Legalization) - [ ] P3:

[apache/tvm-rfcs] [RFC] Arm Ethos-U Integration (#11)

2021-07-15 Thread Manupa Karunaratne
[uTVM] This commit adds the markdown for the Arm Ethos-U integration into TVM along with the diagrams used within the RFC. cc: @areusch @mbaret @tqchen You can view, comment on, or merge this pull request online at: https://github.com/apache/tvm-rfcs/pull/11 -- Commit Summary -- * Arm Ethos-U In

Re: [apache/tvm-rfcs] [RFC] TVM Unified Static Memory Planning (#9)

2021-07-06 Thread Manupa Karunaratne
Thanks @tkonolige for taking a look at this. As per the pre-RFC discussion, there is nothing preventing us from integrating the unified static memory planner for the graph executor (apart from doing the actual work for it, of course :)) as long as the Relay "main" function is lowered to TIR before crea

Re: [apache/tvm-rfcs] [RFC] TVM Unified Static Memory Planning (#9)

2021-07-06 Thread Manupa Karunaratne
cc: @areusch @mbaret @tqchen (I can't seem to tag the original people in the RFC -- working on it) -- Reply to this email directly or view it on GitHub: https://github.com/apache/tvm-rfcs/pull/9#issuecomment-874706027

[apache/tvm-rfcs] [RFC] TVM Unified Static Memory Planning (#9)

2021-07-06 Thread Manupa Karunaratne
This commit adds the RFC (.md) for USMP. Pre-RFC on Discuss: https://discuss.tvm.apache.org/t/rfc-unified-static-memory-planning/10099 You can view, comment on, or merge this pull request online at: https://github.com/apache/tvm-rfcs/pull/9 -- Commit Summary -- * [RFC] TVM Unified Sta

[apache/tvm] [RFC] Unified Static Memory Planning (USMP) Tracking Issue (#8404)

2021-07-05 Thread Manupa Karunaratne
This is the tracking issue for the changes proposed in the USMP [RFC](https://discuss.tvm.apache.org/t/rfc-unified-static-memory-planning/10099). # Steps - [ ] Introduction and integration of tir.allocate_const (similar to tir.allocate). The integration would be a refactor of the current link-para

[Apache TVM Discuss] [Development/RFC] [RFC] Unified Static Memory Planning

2021-05-26 Thread Manupa Karunaratne via Apache TVM Discuss
# Background Currently, given an ML model, TVM will primarily generate two main artifacts: * A1: Description of the sequential execution of operators: 1. If the "executor" is "graph", this would be a JSON; 2. if the "executor" is "aot", this would be a main function describing the call graph o
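A hedged sketch of how the "executor" choice referenced above is selected at build time in more recent TVM releases (the Executor/Runtime objects landed after this post, so names and defaults may differ by version):

```python
import tvm
from tvm import relay
from tvm.relay.backend import Executor, Runtime

def build_for(mod, params, kind="graph"):
    # kind == "graph" -> A1 case 1: a JSON execution plan for the graph executor
    # kind == "aot"   -> A1 case 2: a generated main function describing the call graph
    return relay.build(
        mod,
        target="c",
        params=params,
        executor=Executor(kind),
        runtime=Runtime("crt"),
    )
```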

[Apache TVM Discuss] [Development/RFC] [RFC] tlcpack: Thirdparty Binary Packages

2021-05-18 Thread Manupa Karunaratne via Apache TVM Discuss
Hi @tqchen, I'm not sure whether that requires different namespaces for the packages. Why can't we use something as follows: * Released versions, e.g.: tlcpack-0.8, tlcpack-0.8.1, tlcpack-0.9 * Pre-release versions instead of tlcpack_nightly: tlcpack-0.10.devXXX
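For reference, PEP 440 already orders such .dev pre-releases below the corresponding final release under a single package name, which is the behaviour the scheme above leans on; a quick check with the `packaging` library:

```python
from packaging.version import Version

# A .devN pre-release sorts before the corresponding final release,
# so "tlcpack" nightlies and releases can share one package name.
assert Version("0.10.dev123") < Version("0.10")
assert Version("0.8") < Version("0.8.1") < Version("0.9")
```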

[Apache TVM Discuss] [Development/RFC] [mini-RFC] Name mangling in AOT

2021-05-11 Thread Manupa Karunaratne via Apache TVM Discuss
Hi @areusch @tqchen @giuseros, I think it's best to use the _tvm prefix nonetheless -- so we don't pollute a namespace based on a user-given variable. I don't follow why a "prefix" necessarily means the user is able to select it. If "prefix" is not the right term, we should not call it a prefix. The g

[Apache TVM Discuss] [Development/RFC] [RFC][uTVM] Query intermediary workspace requirement

2021-04-15 Thread Manupa Karunaratne via Apache TVM Discuss
[quote="areusch, post:4, topic:9643"] The main question I have though is: if we are just going to hoist tensors out of operator implementations, why do we need to have a way to lookup PrimFunc workspace size? Can’t we just get that by looking at the arguments? [/quote] We think they will get h

[Apache TVM Discuss] [Development] Duplication of the driver between C++ and Python

2021-04-14 Thread Manupa Karunaratne via Apache TVM Discuss
Yes, this part has been a pain point in figuring out which part of the compilation pipeline is being run. Regarding *lower*, I think the C++ version is not run at the minute (maybe not anywhere in the TVM compilation -- correct me if I am wrong) because there is a check for the registered p

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2021-02-17 Thread Manupa Karunaratne via Apache TVM Discuss
On a side note to this conversation about new primitives, will the new TensorIR include the primitive "store_at" -- the one present in Halide/Tiramisu? I just want to know if that's something on the roadmap :slight_smile:. --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-tens

[Apache TVM Discuss] [Development/RFC] [RFC] [µTVM] Model Library Format

2021-02-15 Thread Manupa Karunaratne via Apache TVM Discuss
So generally BYOC caters to two types of use cases, mostly to handle accelerators and optimized operator libraries (e.g., Arm Compute Library, DNNL). I think in the world of micro, both of these should be invoked in the target_host via the Driver/Runtime API component, i.e., even though th

[Apache TVM Discuss] [Development/RFC] [RFC] [µTVM] Model Library Format

2021-02-15 Thread Manupa Karunaratne via Apache TVM Discuss
Hi @areusch, thanks for taking the time to put this all up. Overall it makes sense to me. > (TODO, but not as a result of this RFC) Group the non-host modules by target_type (except that ext_dev target_types should be expanded to a unique key per BYOC). Save each generated module into a fi

[Apache TVM Discuss] [Development/RFC] [RFC] TVMC: Add support for µTVM

2021-02-08 Thread Manupa Karunaratne via Apache TVM Discuss
Hi @gromero and @areusch, interesting discussions! > The only thing I see specific to that use-case is that it has a runtime "adapter" for TVM C and C++ interfaces (in bundle.c and bundle.cc) that will be used by the application and will be linked by an ad hoc (per project or applica

[Apache TVM Discuss] [Development/RFC] [RFC] 'Cascade' Scheduling

2020-10-14 Thread Manupa Karunaratne via Apache TVM Discuss
That is a good suggestion, @tqchen, and it presents a good compromise for us to see the complexities and opportunities that would open up by being able to express multi-op TIR blocks (hierarchical TIR blocks?). --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-cascade-scheduling/8

[Apache TVM Discuss] [Development/RFC] [RFC] 'Cascade' Scheduling

2020-10-12 Thread Manupa Karunaratne via Apache TVM Discuss
@tqchen, so this makes me wonder -- what are the exact reasons that we need to maintain the Relay abstraction up to the graph runtime? As @matt-arm mentions, I quite like the idea of making fuse-ops a TIR (the improved one with blocks) pass, because currently it is forward-guessing the sema

[Apache TVM Discuss] [Development] Creating store_at in TVM

2020-10-05 Thread Manupa Karunaratne via Apache TVM Discuss
Yes, definitely useful to have! It might save a lot of hacks/workarounds that would otherwise be needed to get the same functionality. Also cc: @spectrometerHBH @merrymercy --- [Visit Topic](https://discuss.tvm.apache.org/t/creating-store-at-in-tvm/8083/2) to respond.

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-10-01 Thread Manupa Karunaratne via Apache TVM Discuss
Yes, the ambiguity is something I was struggling with too when having a conversation. May I ask what the "T" of the old TIR stands for? TVM? --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-tensorir-a-schedulable-ir-for-tvm/7872/40) to respond.

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-15 Thread Manupa Karunaratne via Apache TVM Discuss
Thanks for the clarification! I concur that such a primitive should be useful and would allow more flexible compute movements. Regarding the full graph, I agree that Relay (along with its optimizations) is very useful. I was wondering whether there would be a benefit to lowering the full graph t

[Apache TVM Discuss] [Development/RFC] [RFC] TensorIR: A schedulable IR for TVM

2020-09-15 Thread Manupa Karunaratne via Apache TVM Discuss
Thanks for the proposal! Looks quite interesting! Out of curiosity: 1) In the concat example you've shown, the original stage is represented in three blocks that seem to be assigning to the same buffer. I'm curious what happens if we want to move the concat (using compute_at, if possible

[TVM Discuss] [Development/RFC] [RFC][µTVM] Standalone µTVM Roadmap

2020-06-16 Thread Manupa Karunaratne via TVM Discuss
Thanks for the RFC @areusch -- especially posting this ahead of the meetup. I am trying to understand some bits around the BYO memory allocator subgoal. Would you be able to elaborate more on this? (Is this about allocating tensors with memory blocks/regions/addresses? If so, are we

[TVM Discuss] [Development/RFC] [RFC] [ETHOSN] Arm Ethos-N integration

2020-05-18 Thread Manupa Karunaratne via TVM Discuss
@comaniac, I believe that when you say you are going to re-write merge-composite with the pattern language, that means you are essentially going to replace the pattern tables with patterns from the language (as far as the interface is concerned). Correct? I also don't see any problems

[TVM Discuss] [Development/RFC] [Relay] Improved graph partitioning algorithm

2020-03-26 Thread Manupa Karunaratne via TVM Discuss
@tico Have a look at our proposal: https://discuss.tvm.ai/t/rfc-byoc-an-extended-graph-partitioning-flow/6028. I think the "optimal" graph partitioning/mapping could be easily plugged in as the "De-conflict" step that we are proposing (although we are currently proposing a generic, simple gr