Hi @tqchen, Thanks a lot for your comments.
Actually, I understand the first part of your comment, but I am afraid I don't follow the rest :slight_smile: Just to make sure I fully understand:

- About adding a 0.5 (rounding) factor to the bias, what do you mean exactly? The bias is added before the requantization (as an int32), right? Do you mean incorporating the bias addition within `fixed_point_multiply()`?
- About the comment on legalization: do you mean intercepting the fixed-point multiplication during the legalization pass?

A different implementation would be to have `fixed_point_multiply()` as a TOPI operator (instead of an intrinsic) and invoke the add/multiply/shift there (i.e., inside the compute). That operator could then be overridden for a specific target (e.g., Arm) to use the LLVM intrinsics (a rough sketch follows below). What do you think?
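To make that second alternative a bit more concrete, here is a rough sketch of what such a generic TOPI compute could look like. The function name `fixed_point_multiply_compute`, the Q31 multiplier convention, and the round-half-up nudge are all my assumptions for illustration, not the actual TVM API:

```python
# Sketch only: a generic TOPI-style compute for fixed-point multiplication,
# assuming a Q31 `multiplier` and a left `shift` (0 <= shift < 31).
import tvm
from tvm import te


def fixed_point_multiply_compute(data, multiplier, shift):
    """out = round(data * multiplier * 2^-31 * 2^shift), element-wise on int32 data."""
    total_rshift = 31 - shift  # net right shift after the Q31 multiplication

    def _compute(*indices):
        # widen to int64 so the 32x32-bit product does not overflow
        x = data(*indices).astype("int64")
        prod = x * tvm.tir.const(multiplier, "int64")
        # "add 0.5" before truncating: the usual round-half-up nudge
        nudge = tvm.tir.const(1 << (total_rshift - 1), "int64")
        return ((prod + nudge) >> total_rshift).astype("int32")

    return te.compute(data.shape, _compute, name="fixed_point_multiply")
```

An Arm-specific override could then swap this generic add/multiply/shift body for one that emits, for instance, the saturating rounding doubling multiply-high and rounding shift instructions (SQRDMULH / SRSHL) through LLVM intrinsics.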