We should be aware that if we disable tuple fusion when a tuple is the return value, we might lose some efficiency: the single fused primitive is split into several primitives, each invoked separately.

For example, this function
```
fn (%x: Tensor[(64, 64), float32]) -> (Tensor[(32, 64), float32], Tensor[(32, 64), float32]) {
  %2 = fn (%p0: Tensor[(64, 64), float32], __dict__=meta[StrMap][0]) -> (Tensor[(32, 64), float32], Tensor[(32, 64), float32]) {
    %0 = strided_slice(%p0, begin=[0, 0], end=[32, 64], strides=[1, 1])
    %1 = strided_slice(%p0, begin=[32, 0], end=[64, 64], strides=[1, 1])
    (%0, %1)
  }
  %2(%x)
}
```

will become this:
```
fn (%x: Tensor[(64, 64), float32]) -> (Tensor[(32, 64), float32], Tensor[(32, 64), float32]) {
  %0 = fn (%p0: Tensor[(64, 64), float32], __dict__=meta[StrMap][0]) -> Tensor[(32, 64), float32] {
    strided_slice(%p0, begin=[0, 0], end=[32, 64], strides=[1, 1])
  }
  %1 = %0(%x)
  %2 = fn (%p01: Tensor[(64, 64), float32], __dict__=meta[StrMap][1]) -> Tensor[(32, 64), float32] {
    strided_slice(%p01, begin=[32, 0], end=[64, 64], strides=[1, 1])
  }
  %3 = %2(%x)
  (%1, %3)
}
```
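For intuition, here is a NumPy sketch (plain Python, not TVM code) of the two forms above. The fused version computes both slices in one call, mirroring the single primitive function that returns a tuple; the unfused version splits them into two separate calls that each take the full input, which is where the extra per-primitive invocation overhead comes from. The function names are hypothetical, chosen only for this illustration.

```python
import numpy as np

x = np.random.rand(64, 64).astype("float32")

# Fused form: one primitive computes both slices and returns them
# as a tuple -- the input is handed to a single function call.
def fused(p0):
    return p0[0:32, :], p0[32:64, :]

# Unfused form: each strided_slice becomes its own primitive,
# so the runtime invokes two functions that each receive x.
def slice_top(p0):
    return p0[0:32, :]

def slice_bottom(p0):
    return p0[32:64, :]

a, b = fused(x)
c, d = slice_top(x), slice_bottom(x)

# Both forms are semantically identical; only the call structure differs.
assert np.array_equal(a, c) and np.array_equal(b, d)
```

Either way the results match; the difference disabling tuple fusion makes is purely in how many primitive functions get launched, not in what they compute.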

https://github.com/dmlc/tvm/issues/3039#issuecomment-484373366