[Apache TVM Discuss] [Questions] Batchnorm op Fusion in TVM

2022-03-24 Thread Aakanksha via Apache TVM Discuss
Hi @masahi , I am not quite clear on the bias_add and add op folding that you mentioned. What I intend to achieve, and what I assume you are also implying above, is as follows:

Case 1:
> **before:** conv2d -> bias_add -> add (shift from batchnorm)
>
> is transformed to:
>
> **after tr…
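A minimal sketch of the bias_add/add folding being discussed, assuming both the convolution bias and the batch-norm shift are bound into the module as constants. `CanonicalizeOps`, `SimplifyExpr`, and `FoldConstant` are standard Relay passes, but whether the two consecutive constant adds are actually merged depends on the TVM version; this is not confirmed by the thread:

```python
import tvm
from tvm import relay

# Hedged sketch: rewrite nn.bias_add into a broadcast add, then let the
# simplifier and the constant folder try to combine the two constant adds
# (bias + batch-norm shift) into a single constant.
fold_adds = tvm.transform.Sequential(
    [
        relay.transform.InferType(),
        relay.transform.CanonicalizeOps(),  # nn.bias_add -> add
        relay.transform.SimplifyExpr(),     # may merge consecutive constant adds
        relay.transform.FoldConstant(),     # pre-compute the combined constant
    ]
)
# usage (assuming `mod` is a Relay module whose params were bound as
# constants via bind_params_by_name): mod = fold_adds(mod)
```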

[Apache TVM Discuss] [Questions] Batchnorm op Fusion in TVM

2022-03-23 Thread Aakanksha via Apache TVM Discuss
Thanks @masahi. Okay, so I tried this sequence of passes:

> seq1 = tvm.transform.Sequential(
>     [relay.transform.InferType(),
>      relay.transform.SimplifyInference(),
>      relay.transform.FoldConstant(),
>      relay.transform.FoldScaleAxis(),
>      relay.tra…
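The quoted code cuts off after `FoldScaleAxis`. A hedged sketch of the usual batch-norm folding recipe this sequence appears to follow; how the truncated list actually ends is unknown, and the application step with `mod` (the Relay module from the frontend) is an assumption:

```python
import tvm
from tvm import relay

seq1 = tvm.transform.Sequential(
    [
        relay.transform.InferType(),
        relay.transform.SimplifyInference(),  # batch_norm -> multiply + add
        relay.transform.FoldConstant(),
        relay.transform.FoldScaleAxis(),      # fold the multiply into conv2d weights
    ]
)
# FoldScaleAxis is registered at opt_level 3, so Sequential skips it unless
# the PassContext allows that level:
with tvm.transform.PassContext(opt_level=3):
    mod = seq1(mod)
```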

[Apache TVM Discuss] [Questions] Batchnorm op Fusion in TVM

2022-03-23 Thread Aakanksha via Apache TVM Discuss
I hadn't run `bind_params_by_name`. I tried it now; I no longer see multiply ops, but I still see add ops in place of the batch_norm ops. The script I am using is given below. Thanks @masahi !

> import onnx
> import tvm
> from tvm import relay
> from tvm.relay.build_module import bind_p…
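The script preview cuts off at the import. A sketch of the kind of script described, with the model path, input name, and input shape as placeholders (none of these come from the original post):

```python
import onnx
import tvm
from tvm import relay
from tvm.relay.build_module import bind_params_by_name

# Placeholders: substitute the real model path and input shape.
onnx_model = onnx.load("model.onnx")
mod, params = relay.frontend.from_onnx(onnx_model, shape={"input": (1, 3, 224, 224)})

# Bind the weights as constants so FoldConstant/FoldScaleAxis can see them.
mod["main"] = bind_params_by_name(mod["main"], params)

seq = tvm.transform.Sequential(
    [
        relay.transform.InferType(),
        relay.transform.SimplifyInference(),
        relay.transform.FoldConstant(),
        relay.transform.FoldScaleAxis(),
    ]
)
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)

print(mod)  # the multiplies are folded away; the batch-norm shift remains as an add
```

This matches the behavior reported above: with the params bound, the scale (multiply) folds into the conv2d weights, while the shift survives as an add after the convolution.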

[Apache TVM Discuss] [Questions] Batchnorm op Fusion in TVM

2022-03-23 Thread Aakanksha via Apache TVM Discuss
Hi @masahi ! Thanks for the quick response. I tried the sequence of passes you suggested but am still seeing the same effect, i.e., multiply and add ops in place of the batch_norm op. cc: @mbrookhart

---

[Visit Topic](https://discuss.tvm.apache.org/t/batchnorm-op-fusion-in-tvm/12391/3) to respond.

[Apache TVM Discuss] [Questions] Batchnorm op Fusion in TVM

2022-03-23 Thread Aakanksha via Apache TVM Discuss
Dear All, I am looking for a set of transformation passes in TVM that helps in fusing/folding batch_norm ops into the previous or next convolution-like layers.

**My expectation:**
* **before batchnorm fold**: conv2d -> bias_add -> batch_norm
* **after batchnorm fold**: conv2d *(po…
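For reference, a self-contained sketch of the standard folding recipe applied to a toy graph matching the "before" pattern above; the shapes and random weights are illustrative only, not from the original post:

```python
import numpy as np
import tvm
from tvm import relay

# Toy graph matching the "before" pattern: conv2d -> bias_add -> batch_norm.
data = relay.var("data", shape=(1, 16, 32, 32))
weight = relay.const(np.random.rand(8, 16, 3, 3).astype("float32"))
bias = relay.const(np.random.rand(8).astype("float32"))
gamma = relay.const(np.random.rand(8).astype("float32"))
beta = relay.const(np.random.rand(8).astype("float32"))
moving_mean = relay.const(np.random.rand(8).astype("float32"))
moving_var = relay.const(np.random.rand(8).astype("float32"))

out = relay.nn.conv2d(data, weight, padding=(1, 1), channels=8, kernel_size=(3, 3))
out = relay.nn.bias_add(out, bias)
out = relay.nn.batch_norm(out, gamma, beta, moving_mean, moving_var)[0]
mod = tvm.IRModule.from_expr(relay.Function([data], out))

# The canonical recipe: decompose batch_norm into multiply/add, then fold
# the scale into the conv2d weights and pre-compute the constants.
seq = tvm.transform.Sequential(
    [
        relay.transform.InferType(),
        relay.transform.SimplifyInference(),
        relay.transform.FoldConstant(),
        relay.transform.FoldScaleAxis(),
    ]
)
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)
print(mod)  # expect conv2d plus add(s); no multiply, no batch_norm
```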