Introducing fixed-point multiply in TIR seems like overkill, given that most of 
the operator can be expressed with basic integer arithmetic. Would it be easier 
to detect the pattern (of multiply, shift, and round) and rewrite it into the 
fixed-point multiply?
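
For reference, the pattern in question is just a widening multiply followed by a rounding right shift; a minimal Python sketch of the reference semantics (illustrative only, not the actual TVM implementation):

```python
def fixed_point_multiply(x: int, multiplier: int, shift: int) -> int:
    """Compute round(x * multiplier / 2**shift) with integer ops only.

    multiplier/shift together encode a real-valued scale. Sketch of the
    arithmetic a pattern rewriter would need to match.
    """
    prod = x * multiplier       # widening multiply (int64 in practice)
    nudge = 1 << (shift - 1)    # "+0.5" in fixed point, for rounding
    # Real kernels also saturate and adjust the nudge for negative
    # products; omitted here for brevity.
    return (prod + nudge) >> shift
```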

Notably, we can also add the `0.5` (rounding factor) directly to the bias, so 
we can rely on the round-down behavior of the right shift.
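
A quick worked example of that trick (values made up): with a shift of 4 (divide by 16), adding `1 << 3 = 8` to the accumulator before a plain truncating shift gives round-to-nearest for free.

```python
shift = 4                  # dividing by 2**4 == 16
nudge = 1 << (shift - 1)   # 8, i.e. 0.5 in this fixed-point format

assert 23 >> shift == 1            # plain round-down: 23/16 = 1.4375 -> 1
assert (23 + nudge) >> shift == 1  # 1.4375 still rounds to 1
assert (24 + nudge) >> shift == 2  # 24/16 = 1.5 now rounds up to 2
```

Since the bias add already happens in the schedule, folding the nudge into the bias constant makes the rounding free at runtime.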

I wonder if we can apply better legalization in QNN to get around the issue 
(e.g., use int32 when possible) without having to bring the primitive down to 
the TIR level. cc @anijain2305
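
As one example of the kind of legalization I mean (a sketch with a hypothetical helper, not an existing QNN API): when the requantize scale is an exact power of two, the fixed-point multiply degenerates into a pure shift and never needs the int64 path.

```python
import math

def legalize_scale(scale: float):
    """Hypothetical helper: pick an int32-friendly lowering when the
    requantize scale is an exact power of two, else fall back to the
    full fixed-point multiply."""
    mantissa, exponent = math.frexp(scale)   # scale == mantissa * 2**exponent
    if mantissa == 0.5:                      # scale == 2**(exponent - 1)
        return ("shift", 1 - exponent)       # x >> (1 - exponent) in int32
    return ("fixed_point_multiply", scale)   # needs the widening multiply
```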




