[quote="comaniac, post:7, topic:8290"]
`MergeCompilerRegion`
[/quote]
Thanks @comaniac.
I tried using MergeCompilerRegion, and it gives me an error with the following code. The code works when I comment out MergeCompilerRegion, but it then produces output with UNMERGED @special_
We have many examples of operator conversion in frontend/pytorch.py.
I don't recommend modifying the max pool implementation. TVM doesn't take the indices from max pool into account, so you would need to modify code everywhere.
@masahi Thank you for answering the question in
**https://github.com/apache/incubator-tvm/issues/6761**.
I also want to ask: is it difficult to modify the underlying implementation of the **max pool op** to support **recording the maximum index**? Does TVM have specific example code for adding an operator?
> For example, if I have two “add” operators in my true and false branch, and
> I’d like to partition the true and false branches separately, can
> PartitionGraph() can help me?
This is exactly what PartitionGraph does.
> To me, it looks like PartitionGraph() is limited because it partitions base
Hi All,
Thanks @masahi, @comaniac and @matt-arm. It works now. That said, I'd like to 1) confirm whether I understand the functionality of the PartitionGraph() pass in Relay, and 2) understand whether PartitionGraph() can be used for my specific use case.
Here is my understanding of
Yeah, you're right. I copied it from the conv+bn+relu pattern and forgot to delete the line for the tuple. But even after removing it, it still doesn't work.
---
[Visit
Topic](https://discuss.tvm.apache.org/t/how-to-match-the-pattern-of-a-function-in-relay/8283/17)
to respond.
Why does your pattern have a tuple node? It seems to me that it tries to match

```
%0 = nn.conv2d(%p0, %p1, padding=[1, 1, 1, 1], groups=32, channels=32,
kernel_size=[3, 3], data_layout="NHWC", kernel_layout="HWIO") /* ty=Tensor[(1,
112, 112, 32), float32] */;
%1 = multiply(%0, %p2);
%2
```
@comaniac OK I see. I think FunctionPattern would solve it all for me here.
Glad to see the team has a plan on this!
@mbrookhart
[quote="mbrookhart, post:5, topic:8283, full:true"]
It will definitely go inside the function to match patterns, but you're right,
we don't have a Function Pattern
[/quote]
It should be configurable, but it depends on your use case. You could first figure out when FuseOps is invoked (and by which API) in your script, and then we can see whether it makes sense to run pattern matching and MergeComposite beforehand. Otherwise, you may still need FunctionPattern.
Thank you all for the discussion!
When a module is built, do the pattern matching and the MergeComposite pass
currently happen before FuseOps or after? Is it possible to make it
configurable, as in some use cases (like mine) the matching and merging take
the results of some previous passes,
I also +1 this feature.
It seems that what you want to do is very similar to a previous post of mine:
https://discuss.tvm.apache.org/t/relay-pattern-replacing-with-custom-relay-ops/7531/13
The biggest fear I have of introducing a new Relay operator (which I guess is
what would eventually happen
@comaniac sounds good, we'll see who gets there first :)
Thanks @mbrookhart @matt-arm for the valuable information! Here is a short
summary of this issue.
- The problem the OP has should be resolved by simply matching and partitioning
patterns before `FuseOps`.
- In addition to that, it would still be better to have `FunctionPattern` that
matches and
I'd like to be able to rewrite a composite function into a single op. Obviously
can be done with an ExprMutator but it'd be more convenient if the pattern
language could do it.
Hi All,
I am trying to run TVM on Windows 10. I have built TVM and LLVM with CUDA 10.2 and LLVM 9.0 (maybe the build did not succeed), but when I run the example demo, it returns some errors.
This is my demo:

This is my error:
I have to agree with @mbrookhart's statement that pattern rewriting/partitioning comes before FuseOps. Could you provide more information as to why this would be useful?
I'm happy to add it, but it will be a couple of days before I can get to it.
Anyone else interested in adding the node and some matching tests?
+1 for this feature - I have some use-cases where it would be valuable to match
composite functions.
It will definitely go inside the function to match patterns, but you're right,
we don't have a Function Pattern right now, we should probably add one.
This seems to be a function created by the FuseOps pass. Typically we'd do pattern rewriting/partitioning before that; maybe there's a simpler
Does autotvm support searching for the best parameters of a 3D network on CPU and GPU? If there are successful cases to share, thanks!
---
[Visit
Topic](https://discuss.tvm.apache.org/t/does-autotvm-support-3d-networks-for-cpu-and-gpu/8304/1)
to respond.