+1
---
@gromero I think I am getting a little confused about the difference between
messages in the commits composing the PR and the final commit in main. To make
things clearer, I think it would help to refer to the commit title and commit
message as PR title and PR description, respectively. PR title and
---
Ok, I think I understand things a little better now. Thanks for the
explanation! I can see how, if the IRBuilder handles all state, there is not
really much point in having classes for the parser. It might be worthwhile
to mention having a stateful parser class as an alternative in t
---
I'm still a bit confused on 2. The example you give for a single class is one
that has all static members, but what if instead it were just a regular class?
It would instantiate a single instance of said class to do parsing. This is
what relax is doing
(https://github.com/tlc-pack/relax/blob/25c
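To make the contrast concrete, here is a hypothetical sketch (not the actual
relax parser code; the names are made up) of the regular-class design, where
parse state lives on a single instance:
```python
# Hypothetical sketch: state lives on `self`, and a single instance of the
# class is instantiated to do the parsing.
class Parser:
    def __init__(self, source: str):
        self.source = source  # parser state held on the instance
        self.pos = 0

    def parse(self):
        # Walk self.source, updating self.pos as we go.
        ...

def parse_source(source: str):
    # One instance per parse, so each run gets fresh state with no reset step.
    return Parser(source).parse()
```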
---
> OTOH over time I did realize that it is not always the case that contributors
> avoid force pushing, and eventually some will do it. BTW, we don't have any rule /
> guideline to encourage (or discourage) it. Let me know if besides keeping
> the conversations in the PR (which is ultimately a GH l
---
Meta scheduler (autotir) has not been fully merged yet. I believe it will work
on sparse workloads, but I don't know how well it compares to the hand-written
kernels.
---
Should we be using the formal RFC process for this? (Submitting this RFC as a
PR to the tvm-rfcs repo).
---
+1
---
Thanks for the PR @zackcquic.
1. Why would you like to keep runtime profiling and pass profiling separate?
The benefit I see is that a lot of the code is similar, so we could avoid a lot
of code duplication. On the other hand, runtime profiling does have a lot of
code around handling timing o
---
I can reproduce it now. To me it looks like a bug in scheduling. Maybe @tqchen
knows why this is happening?
---
I believe this line is the issue as it occurs before `threadIdx.z` is defined.
[quote="OValery16, post:6, topic:8338"]
`allocate(compute, int32, [(((floordiv(((threadIdx.z: int32*2) + 1), 4)*32) +
32) - (floordiv(threadIdx.z, 2)*32))]);`
[/quote]
However, I cannot reproduce this issue with the
---
I'm not super familiar with autotvm and auto scheduling, but I've got a couple
of questions:
1. What is the interaction between the auto scheduler and autotvm going to be
in the future? Will we be unifying the user API for autotvm and auto
scheduling? Can you mix auto scheduling and autotvm?
2. Why is the `GraphR
---
@jroesch and I were talking about this a little. We were thinking of
subclassing Span. You'd have SourceSpan, which comes from files, and then
ModelSpan (which could probably use a better name) for handling layers/nodes in
models. This gets around the issue of having meaningless line fields for spans
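A hedged sketch of that idea (the field names and constructors below are
assumptions for illustration, not TVM's actual Span API):
```python
# Illustrative sketch of the subclassing idea; not TVM's real implementation.
class Span:
    """Base class: records where an IR node originally came from."""

class SourceSpan(Span):
    """Span into a source file, where line/column fields are meaningful."""
    def __init__(self, source_name: str, line: int, column: int):
        self.source_name = source_name
        self.line = line
        self.column = column

class ModelSpan(Span):
    """Span pointing at a layer/node in an imported model; no line numbers."""
    def __init__(self, model_name: str, node_name: str):
        self.model_name = model_name
        self.node_name = node_name
```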
---
Could you print out the lowered code? You can use `tvm.lower(s, args)` where
`s` is the schedule. Also, if you provide a minimal example to run, I can take
a look at it.
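For instance, a minimal sketch of dumping the lowered code (the compute
definition here is made up purely for illustration):
```python
import tvm
from tvm import te

# Toy compute definition, just so there is a schedule to lower.
n = 1024
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)

# Print the lowered TIR for inspection; simple_mode keeps the output compact.
print(tvm.lower(s, [A, B], simple_mode=True))
```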
---
Hello @OValery16, I believe the issue you are encountering is that you are
calling `te.thread_axis("threadIdx.z")` multiple times. Instead, can you try
creating the thread axis once with `thread_z = te.thread_axis("threadIdx.z")`
and then use it like so: `s[output].bind(s[output].fuse(tf, td),
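A minimal sketch of that pattern (the compute definition and extents here are
illustrative, not the original code):
```python
import tvm
from tvm import te

n = 64
A = te.placeholder((n, n), name="A")
output = te.compute((n, n), lambda i, j: A[i, j] * 2.0, name="output")
s = te.create_schedule(output.op)

# Create each thread axis exactly once...
block_x = te.thread_axis("blockIdx.x")
thread_z = te.thread_axis("threadIdx.z")

# ...and reuse the same objects at every bind site, instead of calling
# te.thread_axis("threadIdx.z") again for each use.
i, j = s[output].op.axis
s[output].bind(i, block_x)
s[output].bind(j, thread_z)
```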
---
I am a fan of approach A2. It seems like the Python community is moving towards
using poetry, and the poetry format is a lot nicer than requirements.txt for
specifying dependencies. If we autogenerate requirements.txt, then everyone can
use their preferred development tools.
Can poetry build
---
Yes and no. Right now we do not need to differentiate. But in the future,
functions in a module may be either TIR functions or Relay functions.
---
I've put up an initial PR here:
https://github.com/apache/incubator-tvm/pull/6522.
An issue has come up: what do we name the Python module?
## Option 1
We name the module `tvm.tvmscript`.
Example usage:
```python
import tvm
# Can still use this though
@tvm.script # or tvm.script.tir
def my_fu
---
How about this for mixed TIR and Relay:
```python
class MixedModule:
    @relay.script
    def relay_func(x: ty.Tensor):
        return relay.call_tir_dest_passing(tir_func, x)

    @tir.script
    def tir_func(x: ty.handle):
        ...
```
---
## Current issue
TVM currently has two different hybrid scripts: `te.hybrid.script` and
`tvm.hybrid.script`. This leads to confusion, as the two are similar but
have different use cases and properties. This is especially confusing for new
users, as "hybrid script" can refer to either of these
---
One thing to note about black is that it does not support partial formatting of
files, so we cannot run it on the diffs of every PR. It would probably be
best if we run it over the whole codebase before/when it is enabled.
---
# Motivation
Our current test suite takes a while to run. A main reason is that tests that
only require a CPU are also being run on testing nodes that have GPUs. With
multiple PRs, tests running on GPUs are often a limiting factor. Because demand
is high, PRs have to wait until a GPU node is