I have a custom layer `"nucfpga.l2norm"`. Following the doc [Convert Layout Pass — tvm 0.8.dev0 documentation (apache.org)](https://tvm.apache.org/docs/dev/convert_layout.html?highlight=finfercorrectlayout), I have set `FInferCorrectLayout` on its registration:

```
RELAY_REGISTER_OP("nucfpga.l2norm")
```
Hi everybody,
I have an ONNX model that I want to import into TVM. For the following code snippet I get an error:

```
...
shape_dict = {'input_16': (1, 128, 3)}
onnx_model = onnx.load('./mymodel.onnx')
sym, params = relay.frontend.from_onnx(onnx_model, shape_dict)
```

Error: Check fa
I found the Relay IR below while doing some work with ResNet-50. We can see that the two add operators could be merged into one; the log below was built with opt_level=3.

```
%21 = nn.conv2d(%20, meta[relay.Constant][2] /* ty=Tensor[(32, 8, 1, 1, 8,
8), int8] */, padding=[0, 0, 0, 0], channels=256,
```
@ganler, following up on what you mentioned about multiple APIs, this PR might provide more insight:
https://github.com/apache/tvm/pull/8418
The following is an example of how I get `mod`:

```
model_path = "mobilenet_v1_1.0_224_original.onnx"
shape_dict = {"input.0": [1, 3, 224, 224]}
onnx_model = onnx.load(model_path)
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict,
                                       freeze_params=True)  # for unknown shapes
```
now