srcarroll wrote:

> I'm -1 on using `tensor.reshape` op. IMO, we should only use 
> tensor.expand/collapse_shape; they work much better with existing 
> transformations.
> 
> Out of curiosity, what use case do you have in mind? Why do we lower fully 
> dynamic pack op? If it is at high level graph level, we can just use 
> `tensor.pack` which carries more meaningful information. If it is at low 
> level stage (e.g., around vectorization), I think the inner tile sizes should 
> already be resolved to static values? In this context, we can still use 
> `tensor.expand_shape`. It supports the case where one dynamic extent can be 
> expanded into a single dynamic extent and other static extents (e.g., `? -> 
> ?x4`).
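
For reference, a minimal sketch of the `? -> ?x4` expansion described above (illustrative only; the exact `tensor.expand_shape` syntax varies across MLIR versions, and newer versions also take an explicit `output_shape`):

```mlir
// Expand one dynamic extent into a dynamic outer extent and a
// static inner extent of 4: tensor<?xf32> -> tensor<?x4xf32>.
%expanded = tensor.expand_shape %src [[0, 1]]
    : tensor<?xf32> into tensor<?x4xf32>
```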

I'll admit I don't know the use cases here. I worked on the `lower_unpack` 
transform to support dynamic sizes because someone on Discord said they needed 
it. Then I saw the NYI comment for `lower_pack`, so I thought I'd do the same there:
```
"non-static shape NYI, needs a more powerful tensor.expand_shape op"
```
If you never want to support dynamic tiles, that's fine by me. But it shouldn't 
be marked NYI if you never intend to support it.

https://github.com/llvm/llvm-project/pull/76003
_______________________________________________
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits