Thanks for the reply Kevin! Those two layout transforms make sense, but what about the filter parameters? They are loaded from the .pth file in OIHW by default (relay/frontend/pytorch.py), and I set desired_layout to HWIO. Will these filter parameters be transformed in advance, or by a CUDA kernel on each inference?
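If the weight layout transform is folded into the constant parameters at compile time (e.g. by a constant-folding pass), it amounts to a one-time transpose rather than a per-inference kernel. A minimal sketch of the OIHW→HWIO rearrangement, using NumPy and made-up shapes:

```python
import numpy as np

# Hypothetical conv2d weight in PyTorch's default OIHW layout:
# (out_channels, in_channels, kernel_h, kernel_w)
w_oihw = np.arange(2 * 3 * 5 * 5, dtype=np.float32).reshape(2, 3, 5, 5)

# Folding the layout transform at compile time is just a one-time
# transpose to HWIO: (kernel_h, kernel_w, in_channels, out_channels).
w_hwio = w_oihw.transpose(2, 3, 1, 0)

print(w_hwio.shape)  # (5, 5, 3, 2)
```

Once the transposed constant is baked into the compiled module, no layout_transform kernel needs to run on the weights at inference time.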
If the original model layout is NCHW and you convert to NHWC in TVM, at least two layout transformations are required: one at the beginning of the graph and one at the end.
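Those two boundary transforms can be sketched with NumPy (the shapes are illustrative):

```python
import numpy as np

# Input arrives in the model's original NCHW layout.
x_nchw = np.random.rand(1, 3, 8, 8).astype(np.float32)

# Transform 1 (graph entry): NCHW -> NHWC for the converted operators.
x_nhwc = x_nchw.transpose(0, 2, 3, 1)
assert x_nhwc.shape == (1, 8, 8, 3)

# ... the NHWC conv2d operators would run here ...
y_nhwc = x_nhwc  # stand-in for the network body

# Transform 2 (graph exit): NHWC -> NCHW so outputs match the original layout.
y_nchw = y_nhwc.transpose(0, 3, 1, 2)
assert y_nchw.shape == (1, 3, 8, 8)
```

Any layout_transform nodes between operators in the middle of the graph can typically be cancelled or folded away, which is why only the two at the boundaries remain.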
---
@Jiali this could be due to the fact that the ratio of valid schedules over the entire search space is relatively low, so population sampling cannot find any valid schedule. If the evolutionary search starts from a set of invalid schedules, it is highly likely to stay trapped in invalid schedules.
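To see why a low valid ratio starves the sampler, here is a back-of-the-envelope estimate (both numbers are made up for illustration): if a fraction p of the search space is valid and we draw n random samples, the chance of drawing no valid schedule at all is (1 - p)^n.

```python
p = 0.001   # assumed fraction of valid schedules in the search space
n = 2048    # assumed population sample size

# Probability that every one of the n independent samples is invalid.
p_all_invalid = (1 - p) ** n
print(f"{p_all_invalid:.3f}")  # ~0.129
```

So even with a couple of thousand samples, there is a non-trivial chance the initial population contains no valid schedule to evolve from.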
After reading these two links:
[https://discuss.tvm.apache.org/t/layout-conversion-pass/4009/15](https://discuss.tvm.apache.org/t/layout-conversion-pass/4009/15)
[https://tvm.apache.org/docs/dev/convert_layout.html](https://tvm.apache.org/docs/dev/convert_layout.html)
I'm still confused:
Hi everyone!
I modified this sample (https://tvm.apache.org/docs/tutorials/frontend/from_pytorch.html) to add desired_layout NHWC to a network saved from PyTorch (which uses NCHW):
```python
desired_layouts = {'qnn.conv2d': ['NHWC', 'HWIO'],
                   'nn.conv2d': ['NHWC', 'HWIO']}
```