Hi,

Even though I don't think I understood everything, I like the idea of solving 
some of the limitations of `te.compute`. Since `te.compute` is a central part 
of the TVM stack, changing it requires a lot of work and understanding. So 
thank you all for continuing this development.

Q1: I was wondering how this fits into the Relay -> Topi -> TE -> TIR flow. Take 
FuseOps as a more specific case. AFAIK the FuseOps pass creates a multi-stage 
operator out of the individual `te.compute`s. Since you mentioned that there 
would be no notion of a `stage` in the new TensorIR, how would FuseOps work? 
And, more generally, how would TVM's philosophy of "defining a compute rule and 
a separate schedule" change?

[quote="Hzfengsy, post:1, topic:7872"]
TE has limited expressiveness since each stage is defined by `Stage = 
te.compute(lambda expr)`, while TensorIR is a full C++-like IR. We can write 
any program with TensorIR as you want. Although not all programs can be 
scheduled, there are still more workloads that can be optimized by TensorIR.
[/quote]
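
For readers less familiar with the proposal: as I understand it, a TensorIR 
function replaces the per-stage lambdas with explicit loops and blocks that can 
be scheduled directly. A rough sketch of what the fused add + relu above might 
look like, using the TVMScript syntax of later TVM releases (the exact syntax 
in this RFC draft may differ, and the shapes and names here are my own 
assumptions), is:

```python
import tvm
from tvm.script import tir as T

@T.prim_func
def fused_add_relu(a: T.handle, b: T.handle, d: T.handle) -> None:
    A = T.match_buffer(a, (128,), "float32")
    B = T.match_buffer(b, (128,), "float32")
    D = T.match_buffer(d, (128,), "float32")
    C = T.alloc_buffer((128,), "float32")
    for i in range(128):
        with T.block("C"):            # a block instead of a te.compute stage
            vi = T.axis.spatial(128, i)
            C[vi] = A[vi] + B[vi]
    for i in range(128):
        with T.block("D"):
            vi = T.axis.spatial(128, i)
            D[vi] = T.max(C[vi], T.float32(0))

# Scheduling then operates on the blocks and loops of the IR itself,
# not on stage objects.
sch = tvm.tir.Schedule(fused_add_relu)
sch.compute_at(sch.get_block("C"), sch.get_loops(sch.get_block("D"))[0])
print(sch.mod)
```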

Q2: What exactly do you mean by "not all programs can be scheduled"? Could you 
maybe give an example?

Q3: You mentioned "new scheduling primitives"; could you maybe give a list?

EDIT:
Q4: What is the expected timeline for the steps?




