Personally I am not that into polyhedral optimization for now, mainly because
most kernels in deep learning can already get good performance with handcrafted
scheduling. For very computationally intensive kernels we already have solid
vendor library support. By comparison, graph-level optimization is more of a
low-hanging fruit.
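
To make "handcrafted scheduling" concrete, here is a minimal sketch using TVM's tensor expression API (tvm.te); the tiling factors, loop order, and target below are illustrative assumptions, not a tuned schedule.

```python
# Sketch: hand-written schedule for a matmul in TVM's tensor expression API.
# The split factors and loop order are arbitrary choices for illustration.
import tvm
from tvm import te

n = 1024
A = te.placeholder((n, n), name="A")
B = te.placeholder((n, n), name="B")
k = te.reduce_axis((0, n), name="k")
C = te.compute((n, n), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="C")

s = te.create_schedule(C.op)
i, j = s[C].op.axis
io, ii = s[C].split(i, factor=32)   # tile the loops by hand
jo, ji = s[C].split(j, factor=32)
s[C].reorder(io, jo, k, ii, ji)     # pick the loop order explicitly
s[C].vectorize(ji)                  # vectorize the innermost spatial loop
s[C].parallel(io)                   # parallelize the outermost loop

func = tvm.build(s, [A, B, C], target="llvm")
```

The point is that these scheduling decisions are made explicitly by the kernel author rather than inferred by a polyhedral framework.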




