I'm running into the same problem. May I ask which LLVM version ended up working?
---
[Visit Topic](https://discuss.tvm.apache.org/t/autotuning-how-to-debug-when-all-trials-are-failing-on-gpu/2833/5) to respond.
Thank you very much. I understand it now.
---
[Visit Topic](https://discuss.tvm.apache.org/t/relay-cannot-compile-while-loop/8294/3) to respond.
Hmm, I'm not quite sure whether the pattern matcher will descend into Relay functions for matching. I'll check later, but maybe @mbrookhart could comment.
Meanwhile, maybe we can add a new FunctionPattern that matches function nodes.
---
Thanks! I had checked that out, but it doesn't seem to show a way to match a function. In my case conv+mul+add+relu is already wrapped into a function, so I failed to match the ops directly. One example in the tutorial related to function matching uses a function attr, but it looks like the function
---
Check this out: https://tvm.apache.org/docs/langref/relay_pattern.html
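For reference, a minimal sketch (my own, based on the dataflow pattern API that page documents, not code from this thread) of matching a conv2d → multiply → add → relu chain directly; note it matches the operator chain, not a function wrapping it:
```
from tvm.relay.dataflow_pattern import is_op, wildcard

# Build the chain bottom-up: conv2d -> multiply -> add -> relu.
conv = is_op("nn.conv2d")(wildcard(), wildcard())
mul = is_op("multiply")(conv, wildcard())
add = is_op("add")(mul, wildcard())
pattern = is_op("nn.relu")(add)

# pattern.match(expr) returns True when `expr` ends in such a chain.
```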
---
[Visit Topic](https://discuss.tvm.apache.org/t/how-to-match-the-pattern-of-a-function-in-relay/8283/2) to respond.
@masahi FromTupleType is probably the one you want: it takes a Type describing the layout of `expr` and returns a sequence of expressions corresponding to the linearized view of the tuple, i.e. it handles projecting nested tuples out.
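For intuition, here is a small Python-side sketch (my own illustration of the same idea, not the C++ helper itself) of what that linearized view means:
```
from tvm import relay

def linearize(expr, ty):
    # Recursively project every leaf out of a (possibly nested) tuple type.
    if isinstance(ty, relay.TupleType):
        out = []
        for idx, field_ty in enumerate(ty.fields):
            out.extend(linearize(relay.TupleGetItem(expr, idx), field_ty))
        return out
    return [expr]  # a non-tuple leaf maps to itself
```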
---
You cannot use `relay.build(...)` to build a model with control flow. For that, you need to use the VM.
See, for example:
https://github.com/apache/incubator-tvm/blob/efe3a79aacd934ea5ffb13170230bf199a473e72/tests/python/frontend/pytorch/test_forward.py#L1914
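For completeness, a hedged sketch (not taken from the linked test) of compiling and running a control-flow model with the VM; `mod`, `params`, and `input_data` below are placeholders:
```
import tvm
from tvm import relay
from tvm.runtime.vm import VirtualMachine

# Compile with the VM backend instead of relay.build(...).
with tvm.transform.PassContext(opt_level=3):
    exe = relay.vm.compile(mod, target="llvm", params=params)

vm = VirtualMachine(exe, tvm.cpu())
result = vm.invoke("main", input_data)
```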
---
I want to test the usage of Relay's while_loop and wrote the following simple example:
```
x = relay.var("x", shape=(10, 20))
i = relay.var("i", shape=tuple(), dtype="int32")

def myfun(x, i):
    z = relay.add(x, relay.const(1, "float32"))
    j = relay.add(i, relay.const(1, "int32"))
```
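(The post is cut off here. For reference, a hedged sketch of how such a loop is usually assembled with `tvm.relay.loops.while_loop`; the condition, initial values, and module wiring below are my own assumptions, not the original poster's code.)
```
import tvm
from tvm import relay
from tvm.relay.loops import while_loop

x = relay.var("x", shape=(10, 20))
i = relay.var("i", shape=tuple(), dtype="int32")

def cond(x, i):
    # Loop while i < 10 (reduced to a scalar boolean condition).
    return relay.op.min(relay.less(i, relay.const(10, "int32")))

def body(x, i):
    # Return the updated loop variables.
    return [relay.add(x, relay.const(1.0, "float32")),
            relay.add(i, relay.const(1, "int32"))]

loop = while_loop(cond, [x, i], body)
init_x = relay.var("init_x", shape=(10, 20))
ret = loop(init_x, relay.const(0, "int32"))
mod = tvm.IRModule.from_expr(relay.Function([init_x], relay.TupleGetItem(ret, 0)))
```
---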
thanks, I'll take a look
---
[Visit
Topic](https://discuss.tvm.apache.org/t/graph-plan-memory-doesnt-support-nested-tuples/8278/8)
to respond.
You are receiving this because you enabled mailing list mode.
To unsubscribe from these emails, [click
here](https://discuss.tvm.apache.org/emai
The helpers are here:
https://github.com/apache/incubator-tvm/blob/98c2096f4944bdbdbbb2b7b20ccd35c6c11dfbf6/src/relay/op/memory/memory.cc#L287-L300
---
[Visit Topic](https://discuss.tvm.apache.org/t/graph-plan-memory-doesnt-support-nested-tuples/8278/7) to respond.
There is a C++ helper called Linearize or FlattenTuple (I can look it up later).
---
[Visit Topic](https://discuss.tvm.apache.org/t/graph-plan-memory-doesnt-support-nested-tuples/8278/6) to respond.
OK, thanks! I found the code Jared was probably referring to (`transform/memory_plan.py` and `transform/memory_alloc.py`; I'm not sure why they are written in Python). I'm going to learn about memory planning and see what I can do.
---
@masahi there is code for doing this mapping inside of the VM. If you message me on Slack, we can probably figure out how to update the code; it might require a bit of debugging.
---
[Visit Topic](https://discuss.tvm.apache.org/t/graph-plan-memory-doesnt-support-nested-tuples/8278/4) to respond.
Yes, we will need to update the code if we want to support nested tuples. Perhaps we can pass the token around in nested tuples as well and unpack them.
---
[Visit Topic](https://discuss.tvm.apache.org/t/graph-plan-memory-doesnt-support-nested-tuples/8278/3) to respond.
Good catch @masahi :grinning:
---
[Visit Topic](https://discuss.tvm.apache.org/t/understanding-tvm-relays-partitiongraph-mod-function/8290/5) to respond.
Isn't it simply a problem of free variables? I suggest replacing
```
f = relay.Function([], result)
```
with
```
f = relay.Function(relay.analysis.free_vars(result), result)
```
---
[Visit Topic](https://discuss.tvm.apache.org/t/understanding-tvm-relays-partitiongraph-mod-function/8290/4) to respond.
The recent PR should fix this:
https://github.com/apache/incubator-tvm/pull/6641
See this unit test:
https://github.com/apache/incubator-tvm/blob/main/tests/python/relay/test_pass_annotate_target.py#L358
---
Ping @comaniac @manupa-arm. I have a feeling the if/else handling in this pass
might not be correct. Are you only seeing this problem when you have an If?
---
[Visit Topic](https://discuss.tvm.apache.org/t/understanding-tvm-relays-partitiongraph-mod-function/8290/2) to respond.
Hi all,
I am trying to understand TVM/Relay's graph partitioning functionality. Specifically, I have created the following simple example, and I am getting the error below.
I understand that the PartitionGraph() function assumes the graph is annotated with targets with Annotat
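(The post is cut off here. For context, a hedged sketch of the usual annotate-and-partition flow that PartitionGraph() expects; `mod` and the "dnnl" target name below are placeholders, not the poster's actual script.)
```
import tvm
from tvm import relay

# Annotate supported ops for an external codegen, merge the annotated
# regions, then partition them into separate functions.
seq = tvm.transform.Sequential([
    relay.transform.AnnotateTarget("dnnl"),
    relay.transform.MergeCompilerRegions(),
    relay.transform.PartitionGraph(),
])
with tvm.transform.PassContext(opt_level=3):
    partitioned = seq(mod)
print(partitioned)
```
---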
Is there an API to make a clone (aka deepcopy) of a module and params?
Thanks
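One hedged sketch of a possible workaround (this is not an official clone API; `mod` is an IRModule and `params` the usual dict of NDArrays) would be to round-trip the module through its JSON serialization and copy each parameter array:
```
import tvm

# Structural round-trip gives an independent copy of the IRModule.
mod_clone = tvm.ir.load_json(tvm.ir.save_json(mod))
# Copy each parameter NDArray via NumPy.
params_clone = {k: tvm.nd.array(v.asnumpy()) for k, v in params.items()}
```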
---
[Visit Topic](https://discuss.tvm.apache.org/t/cloning-a-nn-model/8288/1) to respond.
@kwmaeng I've written the sparse_dense kernel for GPUs. It was a little bit of an arduous process, but here are my takeaways:
- Using te only works for some sparse kernels. Sparse kernels are often written as functions over the input tensor. Unfortunately, te requires you to write your kernel
---
We (@manupa-arm) ran into this in the graph partitioner. I think in the end we were forced to introduce logic to flatten such tuples, so if a more fundamental solution can be found, that would simplify our logic.
---
```
def @main(%data: Tensor[(1, 112, 112, 32), float32]) -> Tensor[(1, 112, 112, 64), float32] {
  %3 = fn (%p0: Tensor[(1, 112, 112, 32), float32], %p1: Tensor[(3, 3, 32, 1), float32], %p2: Tensor[(1, 1, 1, 32), float32], %p3: Tensor[(1, 1, 1, 32), float32], Primitive=1) -> Tensor[(1, 112
```