I will have a try. Thanks very much.
---
[Visit
Topic](https://discuss.tvm.ai/t/byoc-problem-about-subgraph-with-tupletypenode-inputs/6522/4)
to respond.
You are receiving this because you enabled mailing list mode.
I have the same problem. How did you solve it?
---
[Visit
Topic](https://discuss.tvm.ai/t/pytorch-onnx-model-nn-conv2-in-particular-dimension-1-conflicts-4096-does-not-match-512/6255/3)
to respond.
@matt-arm : you may find this interesting.
---
[Visit
Topic](https://discuss.tvm.ai/t/byoc-problem-about-subgraph-with-tupletypenode-inputs/6522/3)
to respond.
For now, we suggest that your codegen flatten the tuple here:
https://github.com/apache/incubator-tvm/blob/master/src/relay/backend/contrib/dnnl/codegen.cc#L141
In addition, when processing concatenate nodes, your codegen can retrieve the tuple information by
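The flattening suggested above can be sketched in a TVM-free form. This is only a simplified illustration of the idea, not the actual DNNL codegen: the `Arg` struct and `FlattenArg` function are hypothetical stand-ins for a Relay argument (which is either a single tensor or a tuple of arguments), and the codegen walks it depth-first to emit one flat list of tensor names.

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Hypothetical stand-in for a subgraph argument: a leaf tensor
// (non-empty `tensor`, empty `fields`) or a tuple of arguments
// (non-empty `fields`).
struct Arg {
  std::string tensor;
  std::vector<std::shared_ptr<Arg>> fields;
};

// Append every leaf tensor of `arg` to `out`, depth-first, so a
// nested tuple input becomes one flat list the codegen can consume.
void FlattenArg(const Arg& arg, std::vector<std::string>* out) {
  if (arg.fields.empty()) {
    out->push_back(arg.tensor);
    return;
  }
  for (const auto& field : arg.fields) {
    FlattenArg(*field, out);
  }
}
```

With this shape, a concatenate node does not need special tuple handling at call time: its tuple argument arrives pre-flattened, and the original grouping can be recorded separately while visiting the tuple.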
Hello TVM community,
I am working on a problem where I would like to split the host and device
functionality depending on whether target and target_host are the same. If
target is the same as target_host, I would like to avoid splitting the IR. If
target is different from target_host, then the device
Now, I want to use BYOC to run the SSD-ResNet34 model, and I have run into some problems.
Regarding the "concatenate" operator: if it is in a subgraph, the partitioned graph is:
def @ssdnn_0(%ssdnn_0_i0: (Tensor[(64, 4, 5776), float32], Tensor[(64, 4,
2166), float32], Tensor[(64, 4, 600), float32], Tensor[(64,
Is it a VPN issue? If so, what is the name of the VPN?
---
[Visit
Topic](https://discuss.tvm.ai/t/warningfailed-to-download-tophub-package-for-llvm-urlopen-error-errno-111-connection-refused/5759/6)
to respond.