[Apache TVM Discuss] [Questions] Best way to deal with kernel layout?

2021-03-30 Thread JC Li via Apache TVM Discuss
Well, I have a special BYOC **dense** kernel that deals with a kernel layout different from the default topi.nn implementation. The default implementation has a *weight tensor with shape [out_dim, in_dim]*, while I need [in_dim, out_dim]. Two questions here: 1. How can I change the default behavior o
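
For reference, a minimal TE sketch (not the poster's BYOC code) of a dense compute whose weight is laid out as [in_dim, out_dim] rather than topi.nn.dense's default [out_dim, in_dim]; names and shapes are illustrative:

```python
import tvm
from tvm import te

# Illustrative only: dense with weight stored as [in_dim, out_dim],
# i.e. transposed relative to topi.nn.dense's [out_dim, in_dim] layout.
def dense_weight_in_out(data, weight):
    batch, in_dim = data.shape      # data:   [batch, in_dim]
    _, out_dim = weight.shape       # weight: [in_dim, out_dim]
    k = te.reduce_axis((0, in_dim), name="k")
    return te.compute(
        (batch, out_dim),
        lambda i, j: te.sum(data[i, k] * weight[k, j], axis=k),
        name="dense_in_out",
    )

data = te.placeholder((8, 128), name="data")
weight = te.placeholder((128, 64), name="weight")
out = dense_weight_in_out(data, weight)
s = te.create_schedule(out.op)
```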

[Apache TVM Discuss] [Questions] [autoscheduler] Print tuned Python schedule

2021-03-30 Thread Cody H. Yu via Apache TVM Discuss
Did you try the API mentioned in this tutorial? Specifically, it uses `print(task.print_best(log_file))`. The API you used is for getting the schedule for compilation, not for printing. https://tvm.apache.org/docs/tutorials/auto_scheduler/tune_matmul_x86.html#using-the-record-file
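
A hedged sketch of the pattern from the linked tutorial; the log file is assumed to hold records from an earlier tuning run of the same task:

```python
import tvm
from tvm import te, auto_scheduler

# Workload as defined in the tune_matmul_x86 tutorial.
@auto_scheduler.register_workload
def matmul_add(N, L, M, dtype):
    A = te.placeholder((N, L), name="A", dtype=dtype)
    B = te.placeholder((L, M), name="B", dtype=dtype)
    C = te.placeholder((N, M), name="C", dtype=dtype)
    k = te.reduce_axis((0, L), name="k")
    matmul = te.compute((N, M), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="matmul")
    out = te.compute((N, M), lambda i, j: matmul[i, j] + C[i, j], name="out")
    return [A, B, C, out]

log_file = "matmul.json"  # assumed to contain records from a previous tuning run
task = auto_scheduler.SearchTask(
    func=matmul_add, args=(1024, 1024, 1024, "float32"), target="llvm"
)

# Prints the best record in the log as an equivalent Python schedule.
print(task.print_best(log_file))

# By contrast, this returns (sch, args) for compilation with tvm.build,
# not something meant for printing.
sch, args = task.apply_best(log_file)
```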

[Apache TVM Discuss] [Questions] [autoscheduler] Print tuned Python schedule

2021-03-30 Thread Wheest via Apache TVM Discuss
The documentation lists [`print_python_code_from_state()`](https://tvm.apache.org/docs/api/python/auto_scheduler.html?highlight=auto_scheduler#tvm.auto_scheduler.ComputeDAG.print_python_code_from_state) as a method of `tvm.auto_scheduler.ComputeDAG`, with which we can get a Python code representation of the schedule
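
A hedged sketch of how that method can be reached from a log file; `log_file` and the tuned `task` are assumed to come from an earlier auto-scheduler run:

```python
from tvm import auto_scheduler

# Load the best measured record for this task from the log and ask the
# task's ComputeDAG to render that state as Python schedule code.
inp, _ = auto_scheduler.load_best_record(log_file, task.workload_key)
print(task.compute_dag.print_python_code_from_state(inp.state))
```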

[Apache TVM Discuss] [Questions] Profile on Relay Level?

2021-03-30 Thread Cody H. Yu via Apache TVM Discuss
Since Relay is a graph-level IR, its ops do not have compute and schedule, just input and output types, so latency measurement has to happen at the TIR level. If you want to profile the latency of each op, you could turn off op fusion. However, simply turning off fusion will result in er
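
A minimal, self-contained sketch of disabling fusion at build time; the tiny Relay module is only a stand-in for whatever a frontend importer produces, and, as noted above, whether the resulting graph still builds cleanly depends on the model:

```python
import tvm
from tvm import relay

# Tiny stand-in Relay module; in practice `mod`/`params` come from a frontend.
x = relay.var("x", shape=(1, 16))
w = relay.var("w", shape=(8, 16))
y = relay.nn.relu(relay.nn.dense(x, w))
mod = tvm.IRModule.from_expr(relay.Function([x, w], y))

# Disable the FuseOps pass so each Relay op stays a separate kernel that
# a per-op profiler (e.g. the debug runtime) can time individually.
with tvm.transform.PassContext(opt_level=3, disabled_pass=["FuseOps"]):
    lib = relay.build(mod, target="llvm")
```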

[Apache TVM Discuss] [Questions] Buffer bind scope mismatch

2021-03-30 Thread Wu Zheng via Apache TVM Discuss
I am wondering what this error means: `Check failed: arg->scope == value->scope: Argument local.in_buffer Buffer bind scope mismatch` Here I wrote a matrix multiplication that I want to tensorize with an intrinsic, as in the following code: data = tvm.te.placeholder((64, 64), dtype="int8", name="
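
This error usually means the buffers declared for the tensor intrinsic use a different storage scope than the buffers it is applied to. A hedged sketch of an intrinsic declaration with explicit scopes, not the poster's code; the extern kernel name and shapes are made up for illustration:

```python
import tvm
from tvm import te

def intrin_gemm(m, n, k):
    a = te.placeholder((m, k), dtype="int8", name="a")
    b = te.placeholder((k, n), dtype="int8", name="b")
    kk = te.reduce_axis((0, k), name="kk")
    c = te.compute(
        (m, n),
        lambda i, j: te.sum(a[i, kk].astype("int32") * b[kk, j].astype("int32"), axis=kk),
        name="c",
    )

    # The scope here must match the scope of the stages the intrinsic is
    # tensorized over (e.g. caches created with s.cache_read(..., "local", ...));
    # a mismatch triggers "Buffer bind scope mismatch".
    Ab = tvm.tir.decl_buffer(a.shape, a.dtype, name="A_buf", scope="local", offset_factor=1)
    Bb = tvm.tir.decl_buffer(b.shape, b.dtype, name="B_buf", scope="local", offset_factor=1)
    Cb = tvm.tir.decl_buffer(c.shape, c.dtype, name="C_buf", scope="local", offset_factor=1)

    def intrin_func(ins, outs):
        aa, bb = ins
        cc = outs[0]
        ib = tvm.tir.ir_builder.create()
        # Hypothetical external micro-kernel; replace with the real intrinsic call.
        ib.emit(tvm.tir.call_extern("int32", "gemm_int8_kernel",
                                    aa.access_ptr("r"), bb.access_ptr("r"),
                                    cc.access_ptr("w"), m, n, k))
        return ib.get()

    return te.decl_tensor_intrin(c.op, intrin_func, binds={a: Ab, b: Bb, c: Cb})
```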

[Apache TVM Discuss] [Questions] Print out schedule for debugging

2021-03-30 Thread xintin via Apache TVM Discuss
@JosseVanDelm Thanks. I was reading about it, and it might be helpful. I will update here if I find something more detailed.

[Apache TVM Discuss] [Questions] Print out schedule for debugging

2021-03-30 Thread Josse Van Delm via Apache TVM Discuss
There exists something like [TEDD](https://tvm.apache.org/docs/tutorials/language/tedd.html), but it is not as fine-grained as @YuanLin's answer. You can always use `tvm.lower(schedule, [input_placeholders], simple_mode=True)` to get the "for loop view", though.
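
A minimal, self-contained sketch of that call; the trivial compute is just for illustration:

```python
import tvm
from tvm import te

# Lower a toy schedule and print its TIR to get the "for loop view".
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)
print(tvm.lower(s, [A, B], simple_mode=True))
```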

[Apache TVM Discuss] [Questions] Feedback on TVM port to custom accelerator

2021-03-30 Thread Josse Van Delm via Apache TVM Discuss
Thanks for your reply @areusch! [quote="areusch, post:2, topic:9548"] Is tensorization an option here, or do you need to do more with the TIR after schedule generation? [/quote] Yes, I'm currently trying to use tensorization to map entire convolutions and data preparation steps (data layout,

[Apache TVM Discuss] [Questions] Profile on Relay Level?

2021-03-30 Thread Max Sponner via Apache TVM Discuss
For a project, I want to train a number of models that can predict the execution time of a layer (from its Relay description) on different hardware targets. My current problem is that I am unable to find a nice option to do this. The Debug Runtime measures the execution time for the low level
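
For context, a hedged sketch of how the Debug Runtime is typically driven; it reports per-kernel times for the fused low-level functions rather than per Relay op. `lib` is assumed to be the module returned by `relay.build`, and the module is named `debug_runtime` in older TVM releases:

```python
import tvm
from tvm.contrib.debugger import debug_executor  # `debug_runtime` in older TVM versions

# Create the debug executor from a built module; per-node timing and tensors
# are dumped under `dump_root` after run().
dev = tvm.cpu(0)
m = debug_executor.create(lib.get_graph_json(), lib.get_lib(), dev, dump_root="/tmp/tvmdbg")
m.set_input(**lib.get_params())
m.run()
```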