Hi all, 

While working on accelerating AArch64 through tensorization, I ran into an
issue.

Basically, I am padding my input tensor so that `tensorize` can work (I need
the number of rows to be a multiple of 4 and the number of columns to be a
multiple of 16).
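For concreteness, the padded shape is just each dimension rounded up to the
next multiple required by the intrinsic (the `rows`/`cols` values below are
illustrative, not from my actual workload):

```python
def round_up(x, multiple):
    # Smallest multiple of `multiple` that is >= x.
    return -(-x // multiple) * multiple

# Hypothetical input shape; the intrinsic needs rows % 4 == 0 and cols % 16 == 0.
rows, cols = 30, 70
pad_rows = round_up(rows, 4)    # 32
pad_cols = round_up(cols, 16)   # 80
```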

However, bound inference removes the padding (since it is not used) and, when
I tile the computation, `tir.likely` statements appear. This results in
`tensorize` producing the error:

```
TVMError: Tensorize failed, split condition tir.likely(((...)) relies on var 
defined inside tensorize scope
```

One workaround is to add a (sufficiently convoluted) multiplication by zero
involving a padding element, which tricks bound inference into keeping the
padding (see for example
[here](https://github.com/apache/incubator-tvm/blob/34647ed8defcc52e4d5a173feb577caf35df7a82/python/tvm/topi/bifrost/conv2d.py#L346-L350)).

However, this is very hacky and is not guaranteed to keep working (as bound
inference gets smarter, it might detect that the added term is zero and
remove the padding anyway).

The question is: should we try to come up with a "good" solution for this? 

One idea might be to let `tensorize` accept `@tir.likely` statements and
replace them with a "variable-size" tensorization provided by the developer.

For instance, we might add an `_intrin_func_variable` private function that
gets called only when a *variable tensorization* (i.e., a tensorization over
`@tir.likely`) is needed.
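To make the intended dispatch semantics concrete, here is a plain-Python mock
(none of these names exist in TVM today; the two kernels stand in for
hand-written intrinsics, and `dispatch` models what `tensorize` could do when
it meets a `@tir.likely` guard):

```python
TILE_ROWS, TILE_COLS = 4, 16

def full_tile_kernel(tile):
    # Stand-in for the fixed-shape intrinsic body (_intrin_func),
    # which assumes a full TILE_ROWS x TILE_COLS tile.
    return sum(sum(row) for row in tile)

def variable_tile_kernel(tile, rows, cols):
    # Stand-in for the proposed variable-size body
    # (_intrin_func_variable): it only touches the rows x cols
    # valid region of a partial tile.
    return sum(sum(row[:cols]) for row in tile[:rows])

def dispatch(tile, rows, cols):
    # When the tile is guarded by @tir.likely (i.e. it may be
    # partial), fall back to the variable-size intrinsic;
    # otherwise use the fast fixed-shape one.
    if rows == TILE_ROWS and cols == TILE_COLS:
        return full_tile_kernel(tile)
    return variable_tile_kernel(tile, rows, cols)
```

The point of the mock is that the developer supplies both bodies, and the
scheduler, rather than the developer, decides which one each tile gets.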

I have also read through [this
post](https://discuss.tvm.ai/t/loop-partitioning-and-tensorization-work-on-different-ir-levels/876),
but it doesn't seem to arrive at a concrete solution.

Any ideas? 

@anijain2305, @FrozenGene, @matt-arm, @tqchen, @ramana-arm





---
[Visit 
Topic](https://discuss.tvm.ai/t/loop-partitioning-padding-and-tensorization/7753/1)
 to respond.
