I found the Relay IR below while working with ResNet-50. As you can see, the two add operators could be merged into a single add. The log below was built with opt_level=3.
```
  %21 = nn.conv2d(%20, meta[relay.Constant][2] /* ty=Tensor[(32, 8, 1, 1, 8, 8), int8] */, padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1], data_layout="NCHW8c", kernel_layout="OIHW8i8o", out_layout="NCHW8c", out_dtype="int32") /* ty=Tensor[(1, 32, 56, 56, 8), int32] */;
  %22 = add(%21, meta[relay.Constant][3] /* ty=Tensor[(1, 32, 1, 1, 8), int32] */) /* ty=Tensor[(1, 32, 56, 56, 8), int32] */;
  %23 = add(%22, 32 /* ty=int32 */) /* ty=Tensor[(1, 32, 56, 56, 8), int32] */;
  %24 = right_shift(%23, 6 /* ty=int32 */) /* ty=Tensor[(1, 32, 56, 56, 8), int32] */;
  %25 = clip(%24, a_min=-127f, a_max=127f) /* ty=Tensor[(1, 32, 56, 56, 8), int32] */;
  %26 = cast(%25, dtype="int8") /* ty=Tensor[(1, 32, 56, 56, 8), int8] */;
```
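
To make the pattern concrete, here is a minimal sketch of what I mean (untested against the real model; `x` is a stand-in for the output of `%21`, and the zero tensor is a placeholder for `meta[relay.Constant][3]`, with shapes copied from the IR above). FoldConstant leaves both adds in place, since the first operand of the inner add is not a constant:

```python
import numpy as np
import tvm
from tvm import relay

# add(add(x, c1), c2), where x is not a constant
x = relay.var("x", shape=(1, 32, 56, 56, 8), dtype="int32")
c1 = relay.const(np.zeros((1, 32, 1, 1, 8), dtype="int32"))  # placeholder for meta[relay.Constant][3]
c2 = relay.const(32, dtype="int32")
y = relay.add(relay.add(x, c1), c2)

mod = tvm.IRModule.from_expr(relay.Function([x], y))
mod = relay.transform.FoldConstant()(mod)
print(mod)  # still contains two add ops
```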
Does anyone know whether Relay currently supports this optimization?
https://github.com/apache/tvm/blob/26281792e92ae24ec7a14b11e8df8fbacf9c4882/tests/python/relay/test_pass_fold_constant.py#L67
From the test case above, it seems the transform.FoldConstant pass doesn't handle this optimization. Does anyone know why it isn't implemented? Is it an optimization that shouldn't be done, or has nobody simply had time to do it yet?
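
To illustrate what I mean by merging: a rewrite like the rough sketch below (my own, untested; the class name `MergeConsecutiveAdds` is made up) could reassociate `add(add(x, c1), c2)` into `add(x, add(c1, c2))` using the dataflow pattern rewriter, after which the inner `add(c1, c2)` is fully constant and FoldConstant can collapse it:

```python
from tvm import relay
from tvm.relay.dataflow_pattern import (
    DFPatternCallback, is_constant, is_op, rewrite, wildcard)

class MergeConsecutiveAdds(DFPatternCallback):
    """Sketch: rewrite add(add(x, c1), c2) -> add(x, add(c1, c2))."""

    def __init__(self):
        super().__init__()
        self.x = wildcard()
        self.c1 = is_constant()
        self.c2 = is_constant()
        self.pattern = is_op("add")(is_op("add")(self.x, self.c1), self.c2)

    def callback(self, pre, post, node_map):
        x = node_map[self.x][0]
        c1 = node_map[self.c1][0]
        c2 = node_map[self.c2][0]
        # add(c1, c2) is constant, so a later FoldConstant folds it away
        return relay.add(x, relay.add(c1, c2))

# usage: new_expr = rewrite(MergeConsecutiveAdds(), expr)
```

(Broadcasting should stay legal here, since the int32 scalar just broadcasts into the (1, 32, 1, 1, 8) constant, but I assume a real pass would need to check that in general.)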
Thanks.




