Thanks @masahi. 

Okay so, I tried with this sequence of passes: 

```python
seq1 = tvm.transform.Sequential(
    [
        relay.transform.InferType(),
        relay.transform.SimplifyInference(),
        relay.transform.FoldConstant(),
        relay.transform.FoldScaleAxis(),
        relay.transform.SimplifyInference(),
        relay.transform.FoldConstant(),
    ]
)
```

The `add` ops remain as-is; they are not getting folded into the preceding conv2d's bias.

Also, suppose there is no `bias_add` corresponding to a `conv2d`, but a batchnorm is present. After folding the batchnorm, will a new `bias_add` op eventually be created to absorb the shift, or will the shift remain as an `add` op in that case?





---
[Visit 
Topic](https://discuss.tvm.apache.org/t/batchnorm-op-fusion-in-tvm/12391/7) to 
respond.
