Well, I have a special BYOC **dense** kernel that uses a weight layout
different from the default topi.nn implementation.
The default implementation uses a *weight tensor with shape [out_dim, in_dim]*,
while I need [in_dim, out_dim].
Two questions here:
1. How can I change the default behavior of
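For the compute itself, a minimal sketch of a TE dense that takes the weight as [in_dim, out_dim] could look like this (the function and tensor names are mine, not topi's):

```python
import tvm
from tvm import te

# Hypothetical sketch: a dense compute whose weight is laid out as
# [in_dim, out_dim] instead of topi.nn.dense's default [out_dim, in_dim].
def dense_weight_in_out(data, weight, out_dtype="float32"):
    batch, in_dim = data.shape
    _, out_dim = weight.shape
    k = te.reduce_axis((0, in_dim), name="k")
    return te.compute(
        (batch, out_dim),
        lambda i, j: te.sum(data[i, k].astype(out_dtype) * weight[k, j].astype(out_dtype), axis=k),
        name="dense_in_out",
    )
```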
Did you try the API mentioned in this tutorial? Specifically, it uses
`print(task.print_best(log_file))`. The API you used is for getting the
schedule for compilation, not for printing.
https://tvm.apache.org/docs/tutorials/auto_scheduler/tune_matmul_x86.html#using-the-record-file
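A minimal sketch of the difference, assuming a `SearchTask` named `task` and a `log_file` as in that tutorial:

```python
from tvm import auto_scheduler

# `task` is an auto_scheduler.SearchTask and `log_file` holds its tuning records.
# Print a human-readable Python schedule reconstructed from the best record:
print(task.print_best(log_file))

# By contrast, this API returns (schedule, args) meant to be fed into tvm.build:
sch, args = task.apply_best(log_file)
```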
The documentation lists that, as a method of `tvm.auto_scheduler.ComputeDAG`, we
can get a Python code representation of the schedule with
[`print_python_code_from_state()`](https://tvm.apache.org/docs/api/python/auto_scheduler.html?highlight=auto_scheduler#tvm.auto_scheduler.ComputeDAG.print_python_code_from_state).
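A hedged sketch of how that method could be combined with a tuning log (the variable names such as `log_file` are assumptions, not from the thread):

```python
from tvm import auto_scheduler

# Assuming `task` is the SearchTask that was tuned and `log_file` contains its records.
inp, _ = auto_scheduler.load_best_record(log_file, task.workload_key)
print(task.compute_dag.print_python_code_from_state(inp.state))
```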
Since Relay is a graph-level IR, its ops only carry input and output types rather
than compute and schedule definitions, so latency measurement has to happen at the
TIR level. If you want to profile the latency of each op, you could turn off op
fusion.
However, simply turning off fusion will result in er
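As an illustration of turning off fusion, here is a minimal sketch (the `mod`, `params`, and target are placeholders I'm assuming):

```python
import tvm
from tvm import relay

# Disable the FuseOps pass so each Relay op is compiled (and can be timed) on its own.
with tvm.transform.PassContext(opt_level=3, disabled_pass=["FuseOps"]):
    lib = relay.build(mod, target="llvm", params=params)  # `mod`/`params` assumed given
```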
I am wondering what this error means:
`Check failed: arg->scope == value->scope: Argument local.in_buffer Buffer bind scope mismatch`
Here I wrote a matrix multiplication that I want to tensorize with an intrinsic, as
the code follows:
data = tvm.te.placeholder((64, 64), dtype="int8", name="
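As far as I understand, that check compares the storage scope declared for the intrinsic's buffers against the scope of the buffers the schedule actually binds at the tensorized stage. Below is a toy sketch of my own (not your code; the intrinsic name `my_dot_16_i8` and the scopes are assumptions) showing where the two scopes have to line up:

```python
import tvm
from tvm import te

def dot_intrin():
    # Toy 16-element int8 dot-product intrinsic, only to illustrate buffer binding.
    n = 16
    a = te.placeholder((n,), dtype="int8", name="a")
    b = te.placeholder((n,), dtype="int8", name="b")
    k = te.reduce_axis((0, n), name="k")
    c = te.compute((1,), lambda _: te.sum(a[k].astype("int32") * b[k].astype("int32"), axis=k), name="c")

    # The scope declared here must match the storage scope of the buffers the schedule
    # provides at the tensorized stage (e.g. via s.cache_read(..., "local.in_buffer", ...));
    # otherwise the "Buffer bind scope mismatch" check fires.
    Ab = tvm.tir.decl_buffer(a.shape, a.dtype, name="Ab", scope="local.in_buffer", offset_factor=1)
    Bb = tvm.tir.decl_buffer(b.shape, b.dtype, name="Bb", scope="local.in_buffer", offset_factor=1)
    Cb = tvm.tir.decl_buffer(c.shape, c.dtype, name="Cb", scope="local.out_buffer", offset_factor=1)

    def intrin_func(ins, outs):
        aa, bb = ins
        cc = outs[0]
        ib = tvm.tir.ir_builder.create()
        # Hypothetical external call standing in for the real hardware intrinsic.
        ib.emit(tvm.tir.call_extern("int32", "my_dot_16_i8",
                                    cc.access_ptr("w"), aa.access_ptr("r"), bb.access_ptr("r")))
        return ib.get()

    return te.decl_tensor_intrin(c.op, intrin_func, binds={a: Ab, b: Bb, c: Cb})
```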
@JosseVanDelm Thanks. I was reading about it, and it might be helpful. I will
update here if I find something more detailed.
There exists something like
[TEDD](https://tvm.apache.org/docs/tutorials/language/tedd.html),
but it is not as fine-grained as in @YuanLin's answer.
You can always use `tvm.lower(schedule, [input_placeholders], simple_mode=True)`
to get the "for loop view" though.
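For example, a toy sketch (names are mine, not from the thread) that prints the lowered loop nest:

```python
import tvm
from tvm import te

# Build a trivial schedule and print its "for loop view" with tvm.lower.
n = 1024
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] * 2.0, name="B")
s = te.create_schedule(B.op)
print(tvm.lower(s, [A, B], simple_mode=True))
```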
Thanks for your reply @areusch !
[quote="areusch, post:2, topic:9548"]
Is tensorization an option here, or do you need to do more with the TIR after
schedule generation?
[/quote]
Yes, I'm currently trying to use tensorization to map entire convolutions and
data preparation steps (data layout,
For a project, I want to train a number of models that can predict the
execution time of a layer (from its relay description) on different hardware
targets.
My current problem is that I am unable to find a good way to do this.
The Debug Runtime measures the execution time for the low level
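In case it helps, here is a hedged sketch of using the debug executor for per-op timing; API names differ between TVM versions, and `mod`/`params` are placeholders I'm assuming:

```python
import tvm
from tvm import relay
from tvm.contrib.debugger import debug_executor

# The debug executor reports per-operator run times after each run().
dev = tvm.cpu(0)
lib = relay.build(mod, target="llvm", params=params)  # `mod`/`params` assumed given
m = debug_executor.create(lib.get_graph_json(), lib.get_lib(), dev, dump_root="/tmp/tvmdbg")
m.set_input(**lib.get_params())
m.run()  # prints a per-op time table and dumps trace files under dump_root
```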