[Apache TVM Discuss] [Development] AXIS_SEPARATOR in Relax Tensor

2023-07-26 Thread abhikran-quic via Apache TVM Discuss


Thank you so much @tqchen @slyubomirsky @sanirudh for your detailed inputs on 
this!

I have some thoughts and questions listed below. I also discussed them with 
@sanirudh. It would be great if we can address them in the next unity meeting. 
I'll add this topic to the agenda.

1. The axis separator will be set by the `AlterOpImpl` pass, which can therefore embed `R.memory_axis_separator` for each tensor.

2. IIUC, `FlattenLowAxisSepDimensions` should be invoked before all TIR passes. Now, if a TIR pass depends on the logical shape of a tensor, how should we handle that scenario? Should the logical shape be stored in an attribute so that such TIR passes can still run correctly?

Thanks.





---
[Visit 
Topic](https://discuss.tvm.apache.org/t/axis-separator-in-relax-tensor/15385/8) 
to respond.

You are receiving this because you enabled mailing list mode.

To unsubscribe from these emails, [click 
here](https://discuss.tvm.apache.org/email/unsubscribe/8b9f182e1df6e986ee1ab9e4f3a297023ec0ed0fb32a9849626d9b4f00d36d87).


[Apache TVM Discuss] [Development] AXIS_SEPARATOR in Relax Tensor

2023-07-27 Thread abhikran-quic via Apache TVM Discuss


Thank you @tqchen!
I will try this approach and update here if I have further questions.





---
[Visit 
Topic](https://discuss.tvm.apache.org/t/axis-separator-in-relax-tensor/15385/10)
 to respond.



[Apache TVM Discuss] [Development] AXIS_SEPARATOR in Relax Tensor

2023-08-09 Thread abhikran-quic via Apache TVM Discuss


Hi @tqchen,

While working on the solution, I see two more problems that I'd like to discuss 
here:

1. No control over flattening of buffers: if a buffer has already been flattened by `FlattenLowAxisSepDimensions`, it should not be flattened again by TIR passes such as `FlattenBuffer` or `FlattenStorage`. Is it possible to skip flattening in those TIR passes?
2. Propagating axis_separator information across passes: if a pass executed between `AlterOpImpl` (which introduces `R.memory_axis_separator` for each tensor) and `FlattenLowAxisSepDimensions` introduces new Relax operators, the new operators need to propagate the axis_separators information across the graph. This would mean changing multiple passes, or ensuring that as few passes as possible execute between `AlterOpImpl` and `FlattenLowAxisSepDimensions`.

The reason we want axis_separator information across the graph is to decide between N-d and 1-d buffer allocation during memory planning.

One possible solution: if axis_separator is added to the arguments of the replacement functions (used in `AlterOpImpl`), an analysis pass invoked before memory planning could identify the axis_separators in the graph and collate the information into a data structure that the memory-planning pass then uses to allocate memory.
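The collation step proposed above might look like the following standalone sketch. This is an illustration only: `collect_axis_separators` is a hypothetical helper, and the `(func_name, [(tensor_name, axis_separators)])` records stand in for walking the actual Relax call nodes.

```python
def collect_axis_separators(calls):
    """Collate per-tensor axis_separator annotations from call sites.

    `calls` is a list of (func_name, [(tensor_name, axis_separators)])
    records, a stand-in for traversing Relax call nodes. Returns a
    mapping from tensor name to its axis separators, which a later
    memory-planning step could consult to choose N-d vs 1-d allocation.
    """
    table = {}
    for func_name, args in calls:
        for tensor_name, seps in args:
            prev = table.setdefault(tensor_name, seps)
            if prev != seps:
                # The same tensor annotated with two different layouts
                # is a graph-construction error worth surfacing early.
                raise ValueError(
                    f"conflicting axis_separators for {tensor_name!r} "
                    f"at {func_name}: {prev} vs {seps}"
                )
    return table

calls = [
    ("replacement_conv2d", [("data", [2]), ("weight", [])]),
    ("replacement_add", [("data", [2]), ("bias", [])]),
]
print(collect_axis_separators(calls))
# {'data': [2], 'weight': [], 'bias': []}
```

One design point this sketch surfaces: because the table is built in a single analysis walk just before memory planning, intermediate passes between `AlterOpImpl` and planning would not need to propagate the annotation themselves, which addresses problem 2 above.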

Please share your thoughts/comments on this.

Thank you!





---
[Visit 
Topic](https://discuss.tvm.apache.org/t/axis-separator-in-relax-tensor/15385/11)
 to respond.
