Hi @masahi , I am not quite clear about the bias_add and add op folding
that you mentioned.
So, what I intend to achieve, and what I assume you are also implying above, is
the following:
case 1:
>**before:** conv2d -> bias_add -> add (shift from batchnorm) is transformed to:
>
> **after tr
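In case it helps to see the arithmetic behind case 1: the two constant per-channel additions (the bias_add parameter and the shift left over from a decomposed batch_norm) collapse into a single bias. Here is a minimal numpy sketch of that identity, with illustrative shapes and a channels-first layout assumed; this is not TVM code.

```python
import numpy as np

# Illustrative shapes: a pretend conv2d output y with C channels (channels-first),
# a per-channel bias from bias_add, and a per-channel shift from batch_norm.
rng = np.random.default_rng(0)
C, H, W = 3, 4, 4
y = rng.standard_normal((C, H, W))   # stand-in for the conv2d output
bias = rng.standard_normal(C)        # bias_add parameter
shift = rng.standard_normal(C)       # additive shift from the decomposed batch_norm

# before: conv2d -> bias_add -> add
before = (y + bias[:, None, None]) + shift[:, None, None]

# after: the two constant additions fold into a single bias_add
folded_bias = bias + shift
after = y + folded_bias[:, None, None]

assert np.allclose(before, after)
```

This is exactly the kind of rewrite that constant folding can do once both addends are constants.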
Thanks @masahi.
Okay, so I tried this sequence of passes:
```python
seq1 = tvm.transform.Sequential(
    [relay.transform.InferType(),
     relay.transform.SimplifyInference(),
     relay.transform.FoldConstant(),
     relay.transform.FoldScaleAxis(),
     relay.tra
```
I hadn't run `bind_params_by_name`. I tried it now; I no longer see multiply
ops, but I still see add ops in place of the batch_norm ops. The script I am
using is given below.
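For what it's worth, multiplies disappearing while adds survive is what I would expect from scale folding: a per-channel multiply after a conv can be absorbed into the conv weights, but the additive part has no such identity and can only be merged into a bias. A numpy sketch of that distinction (illustrative, not TVM code; the conv is simplified to a matrix multiply):

```python
import numpy as np

# FoldScaleAxis-style rewrites rely on the identity
#   scale * (W @ x) == (scale[:, None] * W) @ x,
# so the multiply folds into the weights. The shift has no such identity;
# it can only merge with the bias, so a per-channel add remains.
rng = np.random.default_rng(1)
Cout, Cin = 4, 5
W = rng.standard_normal((Cout, Cin))  # stand-in for conv2d weights
x = rng.standard_normal(Cin)          # stand-in for the input
scale = rng.standard_normal(Cout)     # multiply from the decomposed batch_norm
shift = rng.standard_normal(Cout)     # add from the decomposed batch_norm

before = scale * (W @ x) + shift           # conv -> multiply -> add
after = (scale[:, None] * W) @ x + shift   # scaled conv -> add (bias)

assert np.allclose(before, after)
```

So the remaining add ops are, in effect, the new bias of the folded convolution.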
Thanks @masahi !
```python
import onnx

import tvm
from tvm import relay
from tvm.relay.build_module import bind_params_by_name
```
Hi @masahi ! Thanks for the quick response. I tried the sequence of passes you
suggested but am still seeing the same effect, i.e., multiply and add ops in
place of the batch_norm op.
cc: @mbrookhart
---
[Visit Topic](https://discuss.tvm.apache.org/t/batchnorm-op-fusion-in-tvm/12391/3) to respond.
Dear All,
I am looking for a set of transformation passes in TVM that helps fuse/fold
batch_norm ops into the preceding or following convolution-like layers.
**My expectation:**
* **before batchnorm fold**: conv2d -> bias_add -> batch_norm
* **after batchnorm fold**: conv2d *(po
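To make the expectation concrete, here is a numerical sketch of what the fold should compute (illustrative only, not the TVM implementation; the conv is simplified to a per-output-channel matrix multiply). With `s = gamma / sqrt(var + eps)`, batch_norm after a biased conv is equivalent to a single conv with rescaled weights and an adjusted bias:

```python
import numpy as np

# batch_norm(y) = gamma * (y - mean) / sqrt(var + eps) + beta, per output channel.
# With s = gamma / sqrt(var + eps), this folds into the conv as:
#   W' = s * W            (scaled per output channel)
#   b' = s * (b - mean) + beta
rng = np.random.default_rng(2)
Cout, Cin = 4, 5
eps = 1e-5
W = rng.standard_normal((Cout, Cin))  # stand-in for conv2d weights
b = rng.standard_normal(Cout)         # bias_add parameter
x = rng.standard_normal(Cin)          # stand-in for the input
gamma = rng.standard_normal(Cout)
beta = rng.standard_normal(Cout)
mean = rng.standard_normal(Cout)
var = rng.random(Cout) + 0.1          # keep variance positive

s = gamma / np.sqrt(var + eps)

# before: conv2d -> bias_add -> batch_norm
before = s * ((W @ x + b) - mean) + beta

# after: a single conv2d with folded weights and bias
W_folded = s[:, None] * W
b_folded = s * (b - mean) + beta
after = W_folded @ x + b_folded

assert np.allclose(before, after)
```

In TVM, as I understand it, this effect is achieved by decomposing batch_norm first (SimplifyInference) and then folding the resulting multiply/add into the conv (FoldScaleAxis plus FoldConstant), rather than by a single dedicated pass.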