Does TVM have any built-in or automated support for tensor packing 
transformations?

I'm referring to optimizations like those described in the D2L-TVM chapter on 
packed convolution 
(https://tvm.d2l.ai/chapter_cpu_schedules/packed_conv.html#ch-packed-conv-cpu), 
where the data layout is changed (e.g., from NCHW to a packed format like 
NCHW{x}c) to improve cache locality and SIMD utilization on CPUs.

I’d like to know:

1. Can this kind of tensor packing transformation be applied automatically via 
MetaSchedule or other TVM auto-tuning/IR passes?
2. Or, is this kind of packing generally done manually through scheduling 
primitives and layout transforms?
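For reference, the kind of layout change I mean can be sketched in plain NumPy (a hypothetical illustration of the data movement only, not actual TVM API):

```python
import numpy as np

def pack_nchw_to_nchwxc(data: np.ndarray, x: int) -> np.ndarray:
    """Illustrative NCHW -> NCHW{x}c packing: split the channel axis C into
    C//x outer blocks of x channels, and move the x-channel block innermost
    so it is contiguous for vectorized (SIMD) access."""
    n, c, h, w = data.shape
    assert c % x == 0, "channel count must be divisible by the packing factor"
    # (N, C, H, W) -> (N, C//x, x, H, W) -> (N, C//x, H, W, x)
    return data.reshape(n, c // x, x, h, w).transpose(0, 1, 3, 4, 2)

# Example: a (1, 8, 4, 4) tensor packed with x=4 becomes (1, 2, 4, 4, 4).
data = np.arange(1 * 8 * 4 * 4).reshape(1, 8, 4, 4)
packed = pack_nchw_to_nchwxc(data, 4)
```

My question is essentially whether TVM can decide on and apply this kind of transformation for me, or whether I am expected to express it in the schedule myself.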

---
[Visit Topic](https://discuss.tvm.apache.org/t/tensor-packing-in-tvm/18501/1) 
to respond.
