I see. I missed the implementation detail point. My first preference is to place
it inside `Type` (but I guess that may not be the preferred choice as of now,
given how frameworks handle layout).
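For concreteness, a rough sketch of what I mean by that first option; the class and field names below are purely hypothetical, not existing TVM/Relay types:

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass
class TensorType:
    """Hypothetical type that carries layout next to shape and dtype."""
    shape: Tuple[int, ...]
    dtype: str
    layout: str  # e.g. "NCHW" or the blocked "NCHW16c"


# A conv2d output in the blocked layout would then simply be typed as:
out_type = TensorType(shape=(1, 4, 56, 56, 16), dtype="float32", layout="NCHW16c")
```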
The second option that you give is pretty good too. However, how do we read the
layout for example i
I agree that layout needs to be treated carefully, and handling layout deserves
major effort on the compilation/optimization side.
Technically, the only question is how to store this information:
- Types are stored in a row-based format (each Expr has a type)
- Layout and other attributes could
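To illustrate the contrast I have in mind (hypothetical structures, not the actual Relay data model): the type lives "row-wise" on every expression node, while layout could instead be kept as a separate "column" keyed by node:

```python
from dataclasses import dataclass
from typing import Dict


@dataclass(frozen=True)
class Expr:
    """Hypothetical IR node: its type is stored row-wise, on the node itself."""
    op: str
    type: str  # e.g. "Tensor[(1, 64, 56, 56), float32]"


# Layout kept column-wise instead: a side table keyed by node, which a
# rewriting pass can fill in or consult without touching the type system.
conv = Expr(op="conv2d", type="Tensor[(1, 64, 56, 56), float32]")
layout_column: Dict[Expr, str] = {conv: "NCHW16c"}
```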
If it's OK, I will give a couple of reasons why I think treating layout as a
first-class citizen is important (the world can do with one *more* opinion :) ):
* It seems to me that layout was an afterthought for the frameworks. They
started with just one layout; as deep learning progressed, we reali
Right now the design decision is to not put Layout as part of the type system,
because most of the original deep learning programs do not have layout
information and it is implied by the operator. We could, however, provide a
separate column of layout attributes that gets passed in during rewriting if we ne
Check out this tutorial, which tunes networks to take advantage of AVX512 and
thus imposes an NCHWc constraint on the data layout:
https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_x86.html
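Roughly, the relevant part of that tutorial looks like the sketch below (paraphrased from memory, so treat the workload, log file name, and target string as placeholders): once the tuned schedules are applied and `opt_level=3` is used, the AlterOpLayout pass rewrites the x86 conv2d ops into the blocked NCHWc layout for you.

```python
import nnvm
import nnvm.compiler
import nnvm.testing
from tvm import autotvm

# Placeholder workload and tuning log from the AutoTVM step of the tutorial.
net, params = nnvm.testing.resnet.get_workload(num_layers=18, batch_size=1)
target = "llvm -mcpu=skylake-avx512"  # an AVX512-capable CPU

with autotvm.apply_history_best("resnet-18.log"):
    # opt_level=3 enables AlterOpLayout, which converts x86 conv2d to NCHW[x]c.
    with nnvm.compiler.build_config(opt_level=3):
        graph, lib, params = nnvm.compiler.build(
            net, target=target, shape={"data": (1, 3, 224, 224)}, params=params
        )
```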
---
I'd like to ask if there is any plan or any simple way to enable the block
(NCHWc) data format for a whole model instead of the plain (NCHW) format.
After some playing with TVM it seems that the preferred data format is plain.
Significant exceptions are the Intel CPU and Intel Graphics schedulers, which
are capab