Thanks for taking a look @tqchen! Since scheduling will be completed with 
TensorIR, it will provide the building blocks for being plugged into an 
IRModule=>IRModule transformation pass. For our current use case, it's 
important to be able to fall back to previous optimizations in the form of TE 
schedules / TOPI where TensorIR schedule coverage doesn't exist. 

From the [proposed 
strategy](https://discuss.tvm.apache.org/t/discuss-tvm-core-strategy-for-operator-scheduling-and-tuning/16352),
I understand it's important to ensure the schedule can operate on a generic 
compute definition of the operation. In the case of matmul-style operations, 
we'd want to apply "array packing" to the input, which is currently expressed 
via the compute definition. Is it possible to express this through TIR 
scheduling alone?
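
For concreteness, here's a plain-Python sketch of the layout change that 
"array packing" bakes into the compute definition (the names `bn` and 
`packedB` are illustrative, not TVM API): the B operand of a matmul is 
repacked so the innermost loop reads a contiguous `bn`-wide panel.

```python
# Illustrative sketch of array packing for C = A @ B, plain Python.
# B[k][j] is repacked as packedB[j // bn][k][j % bn], so iterating over k
# with a fixed column block touches one contiguous (K, bn) panel.
N, K, bn = 8, 8, 4

# Toy B matrix: B[k][j] = k * N + j.
B = [[k * N + j for j in range(N)] for k in range(K)]

# Pack: one (K, bn) panel per block of bn consecutive columns of B.
packedB = [[[B[k][jo * bn + ji] for ji in range(bn)] for k in range(K)]
           for jo in range(N // bn)]

def matmul_elem(A_row, j):
    """One output element, reading B through the packed layout."""
    return sum(A_row[k] * packedB[j // bn][k][j % bn] for k in range(K))
```

The question above is whether this relayout, which here lives in the 
definition of `packedB`, can instead be recovered purely by TIR schedule 
primitives applied to the naive matmul.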

-- 
https://github.com/apache/tvm-rfcs/pull/107#issuecomment-1944331637