Re: [apache/tvm] [VOTE] Issue Triage Workflow RFC (Issue #12743)

2022-09-08 Thread Tristan Konolige
+1 -- View it on GitHub: https://github.com/apache/tvm/issues/12743#issuecomment-1241259282

Re: [apache/tvm-rfcs] [RFC] Add Commit Message Guideline (PR #88)

2022-08-16 Thread Tristan Konolige
@gromero I think I am getting a little confused by the difference between the messages in the commits composing the PR and the final commit in main. To make things clearer, I think it would help to refer to the commit title and commit message as the PR title and PR description, respectively. PR title and…

Re: [apache/tvm-rfcs] [RFC] TVMScript Metaprogramming (PR #79)

2022-08-03 Thread Tristan Konolige
Ok, I think I understand things a little bit better now. Thanks for the explanation! I can see how, if the IRBuilder handles all state, there is not really much point in having classes for the parser. It might be worthwhile to mention having a stateful parser class as an alternative in t…

Re: [apache/tvm-rfcs] [RFC] TVMScript Metaprogramming (PR #79)

2022-07-29 Thread Tristan Konolige
I'm still a bit confused on 2. The example you give for a single class is one that has all static members, but what if instead it were just a regular class? A single instance of that class would be created to do the parsing. This is what Relax is doing (https://github.com/tlc-pack/relax/blob/25c
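For concreteness, here is a minimal sketch of the two shapes I mean; the class names, the `emit` method, and the `ir_builder` argument are hypothetical stand-ins, not the actual TVMScript or Relax parser API:

```python
# Hypothetical illustration only; not the real TVMScript/Relax parser API.

class StaticParser:
    """The all-static variant: no instance state, methods are just namespaced functions."""

    @staticmethod
    def parse_function(ast_node, ir_builder):
        # All state lives in the IRBuilder that gets passed around.
        return ir_builder.emit(ast_node)


class Parser:
    """The regular-class variant: a single instance is created to do the parsing."""

    def __init__(self, ir_builder):
        self.ib = ir_builder  # builder that accumulates the parsed IR
        self.scopes = [{}]    # any parser-local state, e.g. variables in scope

    def parse_function(self, ast_node):
        return self.ib.emit(ast_node)


# Usage of the second variant: one instance per parse.
# parser = Parser(ir_builder)
# parser.parse_function(some_ast)
```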

[Apache TVM Discuss] [Development/pre-RFC] Commit Message Guideline

2022-03-17 Thread Tristan Konolige via Apache TVM Discuss
> OTOH, over time I did realize that it is not always the case that contributors avoid force pushes, and eventually some will do it. BTW, we don’t have any rule / guideline to encourage (or discourage) it. Let me know if, besides keeping the conversations in the PR (which is ultimately a GH l…

[Apache TVM Discuss] [Development] Sparse OpenCL error: scheduling sparse computations that use tir.ir_builder

2021-08-16 Thread Tristan Konolige via Apache TVM Discuss
Meta scheduler (autotir) has not been fully merged yet. I believe it will work on sparse workloads, but I don't know how well it compares to the hand-written kernels.

[Apache TVM Discuss] [Development/pre-RFC] [RFC] Meta Schedule (AutoTensorIR)

2021-06-07 Thread Tristan Konolige via Apache TVM Discuss
Should we be using the formal RFC process for this? (Submitting this RFC as a PR to the tvm-rfcs repo.) --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-meta-schedule-autotensorir/10120/5) to respond.

Re: [apache/tvm] [VOTE] Adopt the New RFC Process (#7991)

2021-05-06 Thread Tristan Konolige
+1 -- View it on GitHub: https://github.com/apache/tvm/issues/7991#issuecomment-833770287

[Apache TVM Discuss] [Development/RFC] Pass Instrument Framework Proposal

2021-05-03 Thread Tristan Konolige via Apache TVM Discuss
Thanks for the PR @zackcquic. 1. Why would you like to keep runtime profiling and pass profiling separate? The benefit I see is that a lot of the code is similar, so we could avoid a lot of code duplication. On the other hand, runtime profiling does have a lot of code around handling timing o…
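To make the comparison concrete, here is a rough sketch of the kind of pass-timing instrument the proposed framework would enable. The `pass_instrument` decorator and the `run_before_pass`/`run_after_pass` hooks follow the interface discussed in this proposal, but treat the exact names and module paths as illustrative rather than final:

```python
import time

import tvm


@tvm.instrument.pass_instrument
class PassTimer:
    """Accumulates wall-clock time per pass via the instrument hooks."""

    def __init__(self):
        self.times = {}     # pass name -> accumulated seconds
        self._start = None

    def run_before_pass(self, mod, info):
        self._start = time.perf_counter()

    def run_after_pass(self, mod, info):
        elapsed = time.perf_counter() - self._start
        self.times[info.name] = self.times.get(info.name, 0.0) + elapsed


# Attach the instrument to a PassContext while compiling, then inspect the totals:
# timer = PassTimer()
# with tvm.transform.PassContext(opt_level=3, instruments=[timer]):
#     lib = relay.build(mod, target="llvm")  # assuming `from tvm import relay` and a Relay module `mod`
# print(timer.times)
```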

[Apache TVM Discuss] [Development] Quantization and 3D convolution

2020-11-16 Thread Tristan Konolige via Apache TVM Discuss
I can reproduce it now. To me it looks like a bug in scheduling. Maybe @tqchen knows why this is happening? --- [Visit Topic](https://discuss.tvm.apache.org/t/quantization-and-3d-convolution/8338/10) to respond.

[Apache TVM Discuss] [Development] Quantization and 3D convolution

2020-11-13 Thread Tristan Konolige via Apache TVM Discuss
I believe this line is the issue as it occurs before `threadIdx.z` is defined. [quote="OValery16, post:6, topic:8338"] `allocate(compute, int32, [(((floordiv(((threadIdx.z: int32*2) + 1), 4)*32) + 32) - (floordiv(threadIdx.z, 2)*32))]);` [/quote] However, I cannot reproduce this issue with the

[Apache TVM Discuss] [Development/RFC] [RFC] A general task extraction mechanism for auto_scheduler

2020-11-13 Thread Tristan Konolige via Apache TVM Discuss
I'm not super familiar with autotvm and auto scheduling, but I've got a couple of questions: 1. What is the interaction between the auto scheduler and autotvm going to be in the future? Will we be unifying the user API for autotvm and auto scheduling? Can you mix auto scheduling and autotvm? 2. Why is the `GraphR

[Apache TVM Discuss] [Development/RFC] Expand Span for imported module

2020-11-11 Thread Tristan Konolige via Apache TVM Discuss
@jroesch and I were talking about this a little. We were thinking of subclassing Span: you'd have SourceSpan, which comes from files, and then ModelSpan (which could probably use a better name) for handling layers/nodes in models. This gets around the issue of having meaningless line fields for spans…
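A rough sketch of the shape we had in mind (plain Python for illustration; the field names are not a committed design, and ModelSpan in particular is a placeholder name):

```python
# Illustrative sketch of the idea; not TVM's actual Span implementation.

class Span:
    """Common base: just enough to attach provenance to an IR node."""


class SourceSpan(Span):
    """Provenance for IR that came from a text file: line/column fields are meaningful."""

    def __init__(self, source_name, line, column, end_line, end_column):
        self.source_name = source_name
        self.line = line
        self.column = column
        self.end_line = end_line
        self.end_column = end_column


class ModelSpan(Span):
    """Provenance for IR imported from a model: identifies a layer/node instead of a line."""

    def __init__(self, model_name, node_name):
        self.model_name = model_name
        self.node_name = node_name  # e.g. the ONNX/TensorFlow node this IR was imported from
```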

[Apache TVM Discuss] [Development] Quantization and 3D convolution

2020-11-10 Thread Tristan Konolige via Apache TVM Discuss
Could you print out the lowered code? You can use `tvm.lower(s, args)` where `s` is the schedule. Also, if you provide a minimal example to run, I can take a look at it. --- [Visit Topic](https://discuss.tvm.apache.org/t/quantization-and-3d-convolution/8338/5) to respond.
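A minimal, self-contained example of printing the lowered code (the placeholder compute here is just a stand-in for the actual workload):

```python
import tvm
from tvm import te

# Stand-in workload: replace with your own placeholders, compute, and schedule.
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)

# `s` is the schedule; the list holds the tensors/arguments of the computation.
print(tvm.lower(s, [A, B], simple_mode=True))
```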

[Apache TVM Discuss] [Development] Quantization and 3D convolution

2020-11-05 Thread Tristan Konolige via Apache TVM Discuss
Hello @OValery16, I believe the issue you are encountering is that you are calling `te.thread_axis("threadIdx.z")` multiple times. Instead, can you try creating the thread axis once with `thread_z = te.thread_axis("threadIdx.z")` and then using it like so: `s[output].bind(s[output].fuse(tf, td), …`
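A small, self-contained sketch of that pattern, where the trivial compute stands in for the real 3D convolution and `tf`/`td`/`tw` stand in for the axes of the original schedule:

```python
import tvm
from tvm import te

# Placeholder compute standing in for the real workload.
A = te.placeholder((8, 8, 8), name="A")
output = te.compute((8, 8, 8), lambda f, d, w: A[f, d, w] * 2.0, name="output")
s = te.create_schedule(output.op)
tf, td, tw = s[output].op.axis

# Create the thread axis once, keep the handle, and pass that handle to bind.
thread_z = te.thread_axis("threadIdx.z")
s[output].bind(s[output].fuse(tf, td), thread_z)
s[output].bind(tw, te.thread_axis("threadIdx.x"))

print(tvm.lower(s, [A, output], simple_mode=True))
```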

[Apache TVM Discuss] [Development/RFC] [RFC] Consolidating TVM Python Dependencies

2020-10-29 Thread Tristan Konolige via Apache TVM Discuss
I am a fan of approach A2. It seems like the python community is moving towards using poetry, and the poetry format is a lot nicer than requirements.txt for specifying dependencies. If we autogenerate requirements.txt, then everyone can use their preferred development tools. Can poetry build

[Apache TVM Discuss] [Development/RFC] [RFC] Rename Hybrid Script

2020-09-21 Thread Tristan Konolige via Apache TVM Discuss
Yes and no. Right now we do not need to differentiate. But in the future, functions in a module may be for either TIR or Relay. --- [Visit Topic](https://discuss.tvm.apache.org/t/rfc-rename-hybrid-script/7915/13) to respond.

[Apache TVM Discuss] [Development/RFC] [RFC] Rename Hybrid Script

2020-09-21 Thread Tristan Konolige via Apache TVM Discuss
I've put up an initial PR here: https://github.com/apache/incubator-tvm/pull/6522. An issue has come up: what do we name the Python module?

## Option 1

We name the module `tvm.tvmscript`. Example usage:

```python
import tvm  # Can still use this though

@tvm.script  # or tvm.script.tir
def my_fu
```

[Apache TVM Discuss] [Development/RFC] [RFC] Rename Hybrid Script

2020-09-17 Thread Tristan Konolige via Apache TVM Discuss
How about this for mixed TIR and Relay:

    class MixedModule:
        @relay.script
        def relay_func(x: ty.Tensor):
            return relay.call_tir_dest_passing(tir_func, x)

        @tir.script
        def tir_func(x: ty.handle):
            ...

[Apache TVM Discuss] [Development/RFC] [RFC] Rename Hybrid Script

2020-09-15 Thread Tristan Konolige via Apache TVM Discuss
## Current issue

TVM currently has two different hybrid scripts: `te.hybrid.script` and `tvm.hybrid.script`. This leads to confusion, as both scripts are similar but have different use cases and properties. This is especially confusing for new users, as hybrid script can refer to either of these…

[Apache TVM Discuss] [Development/RFC] RFC: Introduce automatic formatting of Python code

2020-09-08 Thread Tristan Konolige via Apache TVM Discuss
One thing to note about black is that it does not support partial formatting of files, so we cannot run it on the diffs of every PR. It would probably be best if we run it over the whole codebase before/when it is enabled.

[TVM Discuss] [Development/RFC] [RFC][TESTING] Split testing based on cpu/gpu

2020-08-25 Thread Tristan Konolige via TVM Discuss
# Motivation

Our current test suite takes a while to run. A main reason is that tests that only require a CPU are also being run on testing nodes that have GPUs. With multiple PRs, tests running on GPUs are often a limiting factor. Because demand is high, PRs have to wait until a GPU node is…