By pattern annotation, the composite qnn.op.conv2d + nn.bias_add + qnn.op.requantize corresponds to a complete INT8 CONV-with-bias layer in Caffe.
The code is too specific to the target hardware, and I don't think it is more valuable than a generalized float32 conversion. However, for a fixed-point accelerator, there should always be a requantize op at the end of the pattern.





---
[Visit Topic](https://discuss.tvm.apache.org/t/parsing-relay-subgraph-with-composite-function-under-byoc/9407/6) to respond.