I have the same questions. I'm also confused about the log format generated by the AutoScheduler; it's totally different from the AutoTVM one.
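For concreteness, here is a minimal sketch of how the two logs can be loaded back (both are JSON-lines files, but with different schemas and readers; the file names are just placeholders):

```python
from tvm import autotvm, auto_scheduler

# AutoTVM log: JSON lines decoded into (MeasureInput, MeasureResult) pairs
for inp, res in autotvm.record.load_from_file("autotvm_tuning.log"):
    print(inp.task.name, [float(c) for c in res.costs])

# AutoScheduler log: also JSON lines, but a different schema with its own reader
inputs, results = auto_scheduler.RecordReader("auto_scheduler_tuning.log").read_lines()
print("loaded", len(inputs), "auto-scheduler records")
```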
---
Thanks for the reply, Kevin! Those two layout transforms make sense, but the filter parameters are loaded from the .pth file as OIHW by default (relay/frontend/pytorch.py), and I set the desired layout to HWIO. Will these filter parameters be transformed in advance, or by a CUDA kernel on every inference?
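To make the question concrete, here is a rough sketch of what I expect to happen at compile time, assuming the weights are bound as constants so that FoldConstant can absorb the OIHW-to-HWIO layout_transform (`mod` and `params` are assumed to come from relay.frontend.from_pytorch):

```python
import tvm
from tvm import relay

# Bind the weights into the module as constants so constant folding can see them
mod["main"] = relay.build_module.bind_params_by_name(mod["main"], params)

seq = tvm.transform.Sequential([
    relay.transform.ConvertLayout({"nn.conv2d": ["NHWC", "HWIO"]}),
    relay.transform.FoldConstant(),  # folds layout_transform ops on constant weights
])
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)

# If no layout_transform remains on the conv2d weights after this,
# the conversion happened ahead of time rather than in a kernel at inference.
print(mod["main"])
```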
After reading these two links:
[https://discuss.tvm.apache.org/t/layout-conversion-pass/4009/15](https://discuss.tvm.apache.org/t/layout-conversion-pass/4009/15)
[https://tvm.apache.org/docs/dev/convert_layout.html](https://tvm.apache.org/docs/dev/convert_layout.html)
I'm still confused that
---
Hi everyone!
I modified this sample (https://tvm.apache.org/docs/tutorials/frontend/from_pytorch.html) to add a desired layout of NHWC to the network saved from PyTorch (which uses NCHW):
```python
desired_layouts = {'qnn.conv2d': ['NHWC', 'HWIO'],
                   'nn.conv2d': ['NHWC', 'HWIO']}
```
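For context, this dictionary is fed into the ConvertLayout pass roughly as in the convert_layout docs (a sketch; `mod` is assumed to be the module returned by relay.frontend.from_pytorch in the sample above):

```python
import tvm
from tvm import relay

# mod is the Relay module returned by relay.frontend.from_pytorch
seq = tvm.transform.Sequential([
    relay.transform.RemoveUnusedFunctions(),
    relay.transform.ConvertLayout(desired_layouts),
])
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)
```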