Re: [dmlc/tvm] [RFC][Quantization] Support quantized models from TensorflowLite (#2351)

2019-07-07 Thread ds-jnorwood
TFLite computes the downscale and right-shift integer parameters from a double input, as they do in the call to `QuantizeMultiplier` (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/micro/kernels/conv.cc):

> double real_multiplier = 0.0;
> TF_LITE…
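
The decomposition being referenced can be sketched as follows. This is a Python illustration of the idea behind TFLite's `QuantizeMultiplier` (the real implementation is C++); the function name and return convention here are illustrative, not the exact TFLite API:

```python
import math

def quantize_multiplier(real_multiplier):
    """Decompose a positive double into a Q31 fixed-point multiplier
    and a right shift. A sketch of the QuantizeMultiplier idea, not
    the exact TFLite source."""
    if real_multiplier == 0.0:
        return 0, 0
    # frexp gives real_multiplier = mantissa * 2**exponent, mantissa in [0.5, 1)
    mantissa, exponent = math.frexp(real_multiplier)
    quantized = round(mantissa * (1 << 31))
    if quantized == (1 << 31):  # rounding can push the mantissa up to 1.0
        quantized //= 2
        exponent += 1
    # right shift is positive when the real multiplier is < 1
    return quantized, -exponent
```

The original value is recovered as `quantized / 2**31 * 2**(-right_shift)`, so for example 0.25 decomposes into the Q31 value `1 << 30` with a right shift of 1.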

Re: [dmlc/tvm] [RFC][Quantization] Support quantized models from TensorflowLite (#2351)

2019-07-07 Thread Zhao Wu
> > slight difference in a single point (0.5) is fine and likely won’t have an impact on final acc
>
> Yeah, I was planning to add a rounding param to the op. For "ceil", we could just add a 0.5 rounding without worrying about negative values. For "round", we can be more precise. By defa…

Re: [dmlc/tvm] [RFC][Quantization] Support quantized models from TensorflowLite (#2351)

2019-07-07 Thread Zhao Wu
Let me try to relate your discussion to our internal implementation. For rounding (in requantize), once we have the `input_scale` / `kernel_scale` / `output_scale`, we want to compute the `shift` / `multiplier` (see: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/kernels…
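
Concretely, the derivation described here can be sketched like this (a Python sketch under the usual TFLite convention, `real_multiplier = input_scale * kernel_scale / output_scale`; the helper names are hypothetical):

```python
import math

def requantize_params(input_scale, kernel_scale, output_scale):
    # Effective requantization factor, as in TFLite conv:
    # real_multiplier = input_scale * kernel_scale / output_scale.
    real_multiplier = input_scale * kernel_scale / output_scale
    mantissa, exponent = math.frexp(real_multiplier)
    multiplier = round(mantissa * (1 << 31))  # Q31 fixed point
    right_shift = -exponent                   # positive when factor < 1
    return multiplier, right_shift

def requantize(acc, multiplier, right_shift):
    # Fixed-point multiply, then a rounding right shift: add the
    # "0.5 equivalent" (1 << (shift - 1)) before the floor shift.
    prod = acc * multiplier
    total_shift = 31 + right_shift
    rounding = 1 << (total_shift - 1)
    return (prod + rounding) >> total_shift
```

With scales (0.5, 0.5, 1.0) the factor is 0.25, so an int32 accumulator of 100 requantizes to 25.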

Re: [dmlc/tvm] [RFC][Quantization] Support quantized models from TensorflowLite (#2351)

2019-07-07 Thread Tianqi Chen
In other cases that do not land exactly on 0.5, the behavior is still consistent with round if you add 0.5; this includes negative values, because right shift corresponds to floor division. -- You are receiving this because you are subscribed to this thread. Reply to this email directly
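
The claim is easy to verify numerically: an arithmetic right shift by `s` is floor division by `2**s`, so adding `1 << (s - 1)` (the "0.5 equivalent") beforehand yields round-half-up for positive and negative inputs alike. A small Python check:

```python
import math

def round_shift(x, s):
    # Floor shift plus a pre-added half turns floor into round-half-up.
    return (x + (1 << (s - 1))) >> s

# Holds for negatives too, because >> floors rather than truncating.
for x in range(-16, 17):
    assert round_shift(x, 2) == math.floor(x / 4 + 0.5)
```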

Re: [dmlc/tvm] [RFC][Quantization] Support quantized models from TensorflowLite (#2351)

2019-07-07 Thread Animesh Jain
> slight difference in a single point (0.5) is fine and likely won’t have an impact on final acc

Yeah, I was planning to add a rounding param to the op. For "ceil", we could just add a 0.5 rounding without worrying about negative values. For "round", we can be more precise. By default, we can…

Re: [dmlc/tvm] [RFC][Quantization] Support quantized models from TensorflowLite (#2351)

2019-07-07 Thread Tianqi Chen
slight difference in a single point (0.5) is fine and likely won’t have an impact on final acc

View it on GitHub: https://github.com/dmlc/tvm/issues/2351#issuecomment-509016439

Re: [dmlc/tvm] [RFC][Quantization] Support quantized models from TensorflowLite (#2351)

2019-07-07 Thread Animesh Jain
> One thing to be careful about is that when using shift and normalize, right shift corresponds to round down as opposed to round to nearest; an additional 0.5 equivalence needs to be added to get the round behavior

Yes, I think it is a little more complicated. The std::round of -2.5 is -3.…
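
The subtlety raised here is that C++'s `std::round` breaks .5 ties away from zero (so -2.5 goes to -3), while the add-0.5-then-floor trick breaks them upward (so -2.5 goes to -2); the two agree everywhere except negative exact halves. A quick check (Python's built-in `round` is half-to-even, so `std::round` is emulated explicitly):

```python
import math

def cpp_round(x):
    # Emulates C++ std::round: halfway cases rounded away from zero.
    return math.floor(x + 0.5) if x >= 0 else math.ceil(x - 0.5)

def half_up(x):
    # The shift-and-add trick: floor(x + 0.5), ties rounded upward.
    return math.floor(x + 0.5)

assert cpp_round(-2.5) == -3   # away from zero
assert half_up(-2.5) == -2     # upward
# Agreement away from negative ties:
assert cpp_round(-2.4) == half_up(-2.4) == -2
assert cpp_round(2.5) == half_up(2.5) == 3
```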

Re: [dmlc/tvm] [RFC][ARITH] Introduce FloorDiv/Mod for Context-Independent Simplifications (#3478)

2019-07-07 Thread Wei Chen
I also prefer floordiv, given its use in MLIR. View it on GitHub: https://github.com/dmlc/tvm/issues/3478#issuecomment-509013801

Re: [dmlc/tvm] [RFC][Quantization] Support quantized models from TensorflowLite (#2351)

2019-07-07 Thread ds-jnorwood
> an additional 0.5 equivalence needs to be added to get the round behavior

If followed by relu, you can skip the extra rounding treatment for negative values. Otherwise, for negative values you need to subtract the 0.5 equivalent. If using convergent (nearest/even) rounding, you also need to handle the boundary c…
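
For the convergent (round-half-to-even) case mentioned here, ties at exactly .5 go to the even neighbor, which requires detecting the boundary explicitly. A hypothetical Python sketch of such a rounding right shift (not TFLite code):

```python
def round_shift_to_even(x, s):
    # Start from round-half-up, then fix exact-tie cases: if the
    # discarded fraction was exactly 0.5 and the result is odd,
    # step down to the even neighbor.
    half = 1 << (s - 1)
    q = (x + half) >> s
    if (x & ((1 << s) - 1)) == half and (q & 1):
        q -= 1
    return q
```

For example, shifting by 2, the value 2 (i.e. 0.5) rounds to 0 while 6 (i.e. 1.5) rounds to 2, and the same tie-to-even behavior holds for -2 and -6.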