1. You are right, we did try to save the runtime.Module. We also added the TIR export to the runtime.Module because the generated source code is placed there as well. We will try the different options you proposed as soon as we can find the time.
2. In the meantime we have written an
Hi,
thanks for the reference to this function. I was not aware of it.
However, I tried it on a Module(c) and got this:
```json
{
  "root": 1,
  "nodes": [
    {
      "type_key": ""
    },
    {
      "type_key": "runtime.Module"
    }
  ],
  "b64ndarrays": [],
  "attrs": {"tvm_version": "0.9.dev0"}
}
```
We propose to implement an API in TVM that allows users to export the TIR
representation of the generated kernels as JSON.
**Motivation**
JSON is a standard file format and can easily be processed in Python, for
example. A JSON interface allows analyzing the generated kernels
target-agnostically.
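As a rough sketch of the kind of round trip we have in mind, the existing `tvm.ir.save_json` helper can already serialize a lowered TIR module; the toy schedule and the function name below are just illustrative:

```python
# Sketch only: serialize the TIR of a lowered kernel as JSON with the
# existing tvm.ir.save_json helper (TVM 0.8/0.9-era API assumed).
import json

import tvm
from tvm import te

n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] + 1.0, name="B")
s = te.create_schedule(B.op)

# Lower to TIR; the result is an IRModule containing PrimFuncs.
mod = tvm.lower(s, [A, B], name="add_one")

# Serialize the TIR to JSON and inspect it from plain Python.
tir_json = tvm.ir.save_json(mod)
parsed = json.loads(tir_json)
print(parsed["attrs"]["tvm_version"])
print(len(parsed["nodes"]), "nodes in the serialized TIR")
```

Because the JSON is produced before code generation, the same tooling works for any target.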
+1 from me as well!
Ok, if I understood this example correctly, it is a way to take advantage of
NCHWc when the model is already in that format. What I'm looking for is a
more universal approach that allows converting the whole model to NCHWc
**and not each operation** separately.
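For context, the closest whole-model mechanism I'm aware of is the `ConvertLayout` pass; a hedged sketch follows, with the caveat that whether it accepts blocked layouts such as `NCHW16c` is an assumption on my part:

```python
# Sketch: ask Relay to convert the data layout of every conv2d in one pass
# instead of rewriting operators one by one. Support for blocked layouts
# such as "NCHW16c" in ConvertLayout is assumed, not verified.
import tvm
from tvm import relay


def convert_model_layout(mod, desired="NCHW16c"):
    # First entry is the data layout; "default" lets the pass pick a
    # matching kernel layout.
    desired_layouts = {"nn.conv2d": [desired, "default"]}
    seq = tvm.transform.Sequential(
        [
            relay.transform.RemoveUnusedFunctions(),
            relay.transform.ConvertLayout(desired_layouts),
        ]
    )
    with tvm.transform.PassContext(opt_level=3):
        return seq(mod)
```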
---
I'd like to ask if there is any plan or any simple way to enable the blocked
(NCHWc) data format for a whole model instead of the plain one (NCHW)?
After some experimenting with TVM it seems that the preferred data layout is
the plain one. Significant exceptions are the Intel CPU and Intel Graphics
schedulers, which are capable of working with the blocked format.
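If it helps, my understanding (not checked on every release) is that for Intel CPU targets the switch to NCHWc happens implicitly through `AlterOpLayout` when building at `opt_level=3`; the target string below is only an example:

```python
# Sketch: on x86 targets, building at opt_level=3 runs AlterOpLayout, which
# internally rewrites nn.conv2d into its NCHWc variant. The target string
# and opt level reflect the usual x86 flow and are assumptions here.
import tvm
from tvm import relay


def build_with_blocked_conv2d(mod, params):
    target = "llvm -mcpu=core-avx2"
    with tvm.transform.PassContext(opt_level=3):
        return relay.build(mod, target=target, params=params)
```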
Thank you @Laurawly for the info.
Would you be able to explain a bit more on the topic? In particular, which
parts of the intel_graphics scheduler get the main benefit from subgroups?
---
Thank you for the answer. I'll check those materials.
---
I'm trying to enable AutoTVM for OpenCL (the `intel_graphics` target). So far
I have managed to get some results in that area, but they are several times
worse than with the generic scheduler.
To begin with, I am focusing only on the conv2d operation (since it is also
the only one currently present in `intel_graphics`).
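A rough sketch of the tuning flow I am using is below; the example network, trial count, and log-file name are placeholders for the real model being tuned:

```python
# Sketch of tuning conv2d tasks for the intel_graphics OpenCL target with
# AutoTVM. The ResNet workload, trial count, and log file name are only
# placeholders for the real model being tuned.
from tvm import autotvm, relay
from tvm.relay import testing

# Small example network so the snippet is self-contained.
mod, params = testing.resnet.get_workload(num_layers=18, batch_size=1)

target = "opencl -device=intel_graphics"

# Extract only the conv2d tuning tasks from the model.
tasks = autotvm.task.extract_from_program(
    mod["main"], target=target, params=params, ops=(relay.op.get("nn.conv2d"),)
)

measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(),
    runner=autotvm.LocalRunner(number=10, repeat=1, min_repeat_ms=100),
)

for task in tasks:
    n_trial = min(1000, len(task.config_space))
    tuner = autotvm.tuner.XGBTuner(task)
    tuner.tune(
        n_trial=n_trial,
        measure_option=measure_option,
        callbacks=[
            autotvm.callback.progress_bar(n_trial),
            autotvm.callback.log_to_file("conv2d_intel_graphics.log"),
        ],
    )
```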