## Motivation
We want to port DL models to Relay IR. For that, we want to serialize the
Relay IR to disk. Once serialized, third-party frameworks and compilers should
be able to import those models. We want the serialization format to be compact,
portable, widely adopted, and well-documented.
@jroesch, @tqchen, regarding the naming convention discussion on the PR, I
agree that 'converter' does not seem to be the correct word. The words you
suggested are 'export' and 'target'. I think 'export' should be used, as it
is more in line with other DL frameworks. Please let me know.
Thanks @tqchen for the comments.
To elaborate further, support for Relay to ONNX serialization will help
us take advantage of the hardware-specific optimizations supported by different
compilers. The ONNX format is widely adopted. If a particular compiler supports
a specific format, support for converting ONNX to that format can be leveraged.
**Option C0:**
The original intention was to use Relay to ONNX as a serialization format only.
**Option C1:**
It seems interesting and could fit naturally into TVM, but I wanted to discuss
a few points below.
First, let me put down the different properties or attributes of a target in
general.
So we will be adding support for ONNX codegen only.
I will work on adding a codegen for ONNX and then on an example ONNX
runtime to demonstrate end-to-end functionality. I will also be improving
operator coverage for ONNX.
---
Sure. That makes sense.
---
[Visit Topic](https://discuss.tvm.ai/t/rfc-relay-to-onnx/6101/14) to respond.
@smallcoscat, thanks. Looking forward to collaborating with you.
I will get my PR with basic coverage into the TVM repo, and then you can send
in your PR as well, so that we increase the overall coverage in terms of ops
and models. Sounds good?
I will need some time to work on the codegen part to implement it.
@tqchen, I tried to add ONNX as a target, but since target codegen receives a
lowered IRModule with PrimFunc nodes, I am not able to convert those to ONNX.
However, in the case of external codegen, lowering is deferred to the external
codegen, so there I receive an IRModule without PrimFunc nodes and I am able
to convert it to ONNX.
@smallcoscat, thanks. I also followed this tutorial and was able to create ONNX
codegen for an external runtime. The relevant code is in this PR:
https://github.com/maheshambule/tvm/pull/9
However, as suggested by @tqchen, when I tried to implement 'ONNX' as a target
(and not as an external codegen), I am facing issues.
@tqchen, just to be on the same page, could you please confirm the below?
We do NOT need to register ONNXModule as "target.build.onnx". If registered
this way, it will get invoked from here when we specify the target as "onnx":
https://github.com/apache/incubator-tvm/blob/2cd987d92724be0f859bfb624ce797f9c7016
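To make the dispatch question concrete, here is a toy, stdlib-only model of how a target-keyed codegen registry behaves. The names `register_func` and `build` are simplified stand-ins for TVM's global function registry and build flow, not the real TVM API.

```python
# Toy model of a target-keyed codegen registry (a simplified stand-in
# for TVM's global function registry; not the real TVM API).
_registry = {}

def register_func(name):
    def deco(fn):
        _registry[name] = fn
        return fn
    return deco

def build(mod, target):
    # build() looks up "target.build.<target>" and invokes it, which is
    # why registering ONNXModule under "target.build.onnx" would make it
    # fire whenever the target string is "onnx".
    return _registry[f"target.build.{target}"](mod)

@register_func("target.build.onnx")
def build_onnx(mod):
    return f"onnx-codegen({mod})"

print(build("my_module", "onnx"))
```

This is why skipping the "target.build.onnx" registration keeps the ONNX path out of the normal target build flow, as confirmed above.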
Ok. Thanks for the clarification. I will update the PR.
---
While adding an operator in TOPI, most of the time we need to put certain
checks on dynamic input data values, which are known only at runtime.
Is there a way, in the form of a tensor expression, to assert values in a
compute definition?
For example, an operator that accepts indices as a dynamic input.
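For context on what such a runtime check would enforce, here is a plain NumPy sketch of validating dynamic index values before a gather. It shows the semantics of the desired assert, not TVM's tensor-expression API; the function name `checked_take` is hypothetical.

```python
import numpy as np

def checked_take(data, indices):
    """Gather rows of `data` at `indices`, asserting at call time that
    every dynamic index is in bounds -- the kind of check a compute
    definition would like to express on runtime-only values."""
    indices = np.asarray(indices)
    assert np.all((indices >= 0) & (indices < data.shape[0])), \
        "index out of bounds"
    return data[indices]

data = np.arange(10.0)
print(checked_take(data, [0, 3, 7]))
```

In TVM terms, the question is whether this bounds check can live inside the compute definition itself rather than in Python glue code around it.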
@junrushao1994, thanks. I am not using hybrid script, but this will help when
using hybrid scripts. I will look into it.
Also, does this mean TVM does not support assert natively and you have to use
hybrid script for that?