Sorry about that. I have updated the report again and added a note about the
changes.
Tianqi
On Thu, May 30, 2019 at 8:11 PM Justin Mclean wrote:
> Hi,
>
> I notice that in adding your report [1] you managed to duplicate a large
> section of the report and mess up the formatting of some of the other
> reports.
Hi,
I notice that in adding your report [1] you managed to duplicate a large section
of the report and mess up the formatting of some of the other reports. I've had
to revert these changes; if you could add your report again, but this time take
a little more care in doing so, that would be appreciated.
> We can certainly start with symmetric to flush the flow, while keeping in
> mind that we can share as much infrastructure as possible between them.
All the tflite quantized models I've tested use asymmetric uint8
quantization. If you are planning to use those as examples, it will be hard
I would suggest designing the infrastructure to support both
symmetric and asymmetric quantization. We can certainly start with symmetric to
flush the flow, while keeping in mind that we can share as much infrastructure
as possible between them.
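To make the symmetric/asymmetric distinction concrete, here is a small sketch (plain NumPy, not TVM code; the function names are illustrative) of the two quantization styles. It shows why they can share infrastructure: symmetric quantization is just the special case where the zero point is fixed at 0.

```python
# Sketch: symmetric vs. asymmetric (tflite-style) quantization of a
# float tensor. Illustrative only; these are not actual TVM APIs.
import numpy as np

def quantize_asymmetric(x, scale, zero_point):
    # tflite style: real = scale * (q - zero_point), q is uint8 in [0, 255]
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def quantize_symmetric(x, scale):
    # symmetric style: real = scale * q, q is int8, zero point fixed at 0
    q = np.round(x / scale)
    return np.clip(q, -127, 127).astype(np.int8)

def dequantize(q, scale, zero_point=0):
    # Shared dequantization path: symmetric is the zero_point == 0 case.
    return scale * (q.astype(np.int32) - zero_point)

# Example: an activation range of roughly [-1, 5] quantized asymmetrically.
x = np.array([-1.0, 0.0, 0.5, 6.0], dtype=np.float32)
xq = quantize_asymmetric(x, scale=6.0 / 255, zero_point=42)
print(dequantize(xq, 6.0 / 255, zero_point=42))
```

Note that the last value (6.0) falls outside the representable range implied by this hypothetical scale/zero-point pair and is clamped, which is exactly the saturation behavior a quantized op pipeline has to account for.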
> * namespace for the tflite quantize style dialect
Regarding @jnorwood's comments on the output min/max of conv2d:
Your observations about the **values** of the output min/max are correct, but
they are still activations. The point I always try to deliver is that the INT8
values in quantization represent FP32 values.
When we talk about the ReLU6 activation
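The point above, that the INT8 output min/max of a quantized conv (e.g. the [0, 6] bounds of a fused ReLU6) are really FP32 activation bounds encoded through the scale and zero point, can be sketched as follows. The scale and zero point here are hypothetical values chosen for illustration.

```python
# Sketch: with tflite-style asymmetric uint8 quantization, a fused ReLU6
# does not clamp FP32 values at runtime; its FP32 bounds [0, 6] are
# pre-converted into uint8 clamp limits using the output quantization
# parameters. Values below are illustrative, not from a real model.
import numpy as np

scale, zero_point = 0.05, 10  # hypothetical output quantization params

def to_q(real):
    # Encode an FP32 activation bound as its uint8 representation.
    return int(np.clip(round(real / scale) + zero_point, 0, 255))

act_min, act_max = to_q(0.0), to_q(6.0)  # ReLU6 bounds in the uint8 domain
print(act_min, act_max)
```

So the integers stored as "output min/max" are still descriptions of FP32 activations; they only look like plain integer clamps because the encoding has been baked in ahead of time.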
Some comments on @anijain2305's
[reply](https://github.com/dmlc/tvm/issues/2351#issuecomment-496998142) :)
> > Hi @anijain2305, regarding the requantization: if it is not going to be put
> > in the conv op, the op may be supposed to output FP32; otherwise the
> > semantics are confusing. The requantization
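For readers following the requantization discussion, here is a minimal sketch of what a standalone requantize step does, assuming the common scheme where a quantized conv accumulates in int32 and a separate op rescales that accumulator into the output uint8 domain. Parameter names are illustrative, not a TVM API.

```python
# Sketch: requantization as a separate op after a quantized conv.
# The conv accumulates in int32 with effective scale in_scale * w_scale;
# requantize re-expresses that value as out_scale * (q - out_zp).
import numpy as np

def requantize(acc_int32, in_scale, w_scale, out_scale, out_zp):
    # real value of the accumulator is (in_scale * w_scale) * acc_int32
    multiplier = (in_scale * w_scale) / out_scale
    q = np.round(acc_int32 * multiplier) + out_zp
    return np.clip(q, 0, 255).astype(np.uint8)

# Hypothetical int32 accumulator values from a conv.
acc = np.array([-50, 0, 1200, 40000], dtype=np.int32)
out = requantize(acc, in_scale=0.02, w_scale=0.01, out_scale=0.05, out_zp=3)
print(out)
```

If requantize is kept out of the conv op, the conv's output type has to be pinned down explicitly (int32 accumulator or FP32), which is exactly the semantic ambiguity being debated above. Real implementations also avoid the float multiplier by using a fixed-point multiply and shift, which this sketch omits for clarity.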