[quote="electriclilies, post:21, topic:9775, full:true"]
@mikeseven
Yes, the goal is to create a fully quantized graph, and we do recognize that
this transformation will change the output of the graph. For this reason, we're
not going to present the rewrite as a Relay pass. And I definitely agree.
---
I'd like to make sure the end goal of this framework is to create a fully
quantized graph, i.e. with all operators in affine space.
Unlike the usual transformation constraint in TVM that a graph rewrite doesn't
change the outcome, for quantization it obviously does. Statistics must be
available.
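For readers unfamiliar with the term, "affine space" here refers to the standard affine (asymmetric) quantization mapping. A minimal pure-Python sketch of that mapping, purely illustrative and not TVM's implementation; in the proposed framework the scale and zero point would be derived from the collected statistics:

```python
# Affine (asymmetric) quantization: real_value = scale * (q - zero_point).
# Illustrative sketch only; a real framework derives scale/zero_point
# from calibration statistics (e.g. observed min/max of each tensor).

def choose_qparams(xmin, xmax, qmin=0, qmax=255):
    """Pick scale/zero_point so [xmin, xmax] maps onto [qmin, qmax]."""
    xmin, xmax = min(xmin, 0.0), max(xmax, 0.0)  # range must include 0.0
    scale = (xmax - xmin) / (qmax - qmin)
    zero_point = int(round(qmin - xmin / scale))
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=0, qmax=255):
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))           # clamp to the integer range

def dequantize(q, scale, zero_point):
    return scale * (q - zero_point)

scale, zp = choose_qparams(-1.0, 1.0)
q = quantize(0.5, scale, zp)
x = dequantize(q, scale, zp)  # close to 0.5, within one quantization step
```

The round trip illustrates why the rewrite changes the graph's output: `x` only approximates the original value, with error bounded by the scale.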
---
Thanks for splitting the proposal.
Replying about F1:
Yes, we can generate multiple libraries, but the issue is linking them together.
Specifically, there is no way to differentiate inputs from params/weights, and
no way to know the names of the outputs, as they have been mangled after
simplification.
In scenarios where multiple models are used back to back, with multiple inputs
and outputs, TVM doesn't produce helpful native libraries to connect them:
- `get_num_inputs()` returns all tensors instead of only the model's inputs
- `get_output(id)` has no support for string names.
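One workaround, sketched below, is to record the input/param/output split at build time, when it is still known, and ship it as a sidecar file next to the compiled library. This is plain bookkeeping in pure Python, not an existing TVM API, and all the tensor names are hypothetical:

```python
import json

# Sidecar metadata for a compiled module: at build time we still know
# which of the executor's "inputs" are true model inputs vs. bound
# params/weights, so record that split instead of relying on
# get_num_inputs() / positional get_output(id) later.

def make_io_metadata(all_input_names, param_names, output_names):
    """Split the executor's flat input list into true inputs vs. weights."""
    params = set(param_names)
    return {
        "inputs": [n for n in all_input_names if n not in params],
        "params": sorted(params),
        "outputs": list(output_names),  # order matches get_output(id)
    }

# Hypothetical example: the executor reports 3 "inputs", only one is real.
meta = make_io_metadata(
    all_input_names=["data", "conv0_weight", "conv0_bias"],
    param_names=["conv0_weight", "conv0_bias"],
    output_names=["detect_head_0"],
)
sidecar = json.dumps(meta)  # shipped next to the compiled library
```

With such a sidecar, the code that chains two models can look up real input names and output indices by string instead of guessing from positional IDs.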
---
ScatterND is used in TFLite and ONNX, notably in Yolo v5 models.
@jainris asked for it in August too, but it doesn't seem to be available in the
latest code.
Is there any plan to support it?
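For anyone unfamiliar with the operator, here is a pure-Python sketch of ScatterND's semantics as specified by ONNX (copy the data, then overwrite the positions named by each index tuple with the matching update); this is an illustration of the semantics, not a TVM implementation:

```python
from copy import deepcopy

# ScatterND semantics (as in ONNX): out = copy(data); for each index
# tuple in `indices`, write the corresponding entry of `updates` at
# that position. Sketch using nested Python lists for tensors.

def scatter_nd(data, indices, updates):
    out = deepcopy(data)                 # data itself is left untouched
    for idx, upd in zip(indices, updates):
        target = out
        for i in idx[:-1]:               # walk down to the parent container
            target = target[i]
        target[idx[-1]] = upd            # write the update in place
    return out

# Index tuples [1] and [3] pick elements 1 and 3 of a length-4 vector.
result = scatter_nd([1, 2, 3, 4], [[1], [3]], [20, 40])
# → [1, 20, 3, 40]
```

Multi-dimensional index tuples work the same way, e.g. `[[0, 1]]` addresses element `[0][1]` of a nested list.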
Thanks,
--mike
---
[Visit Topic](https://discuss.tvm.apache.org/t/scatternd-missing/8292/1) to
respond.
I suppose CSE would solve my earlier question, where two identical adds with
the same tensor shapes were not simplified into a single shared add?
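That is the kind of rewrite CSE performs. A conceptual sketch over a toy expression IR, not TVM's actual pass: structurally identical `(op, args)` nodes are hashed and reused, so two equal adds collapse into one shared node:

```python
# Common-subexpression elimination over a tiny expression IR.
# Nodes are ('var', name) leaves or (op, child, child) tuples; identical
# subtrees are cached by structure so they become one shared node.
# Conceptual sketch only, not TVM's EliminateCommonSubexpr pass.

def cse(node, cache=None):
    """Return a deduplicated version of `node`, sharing equal subtrees."""
    if cache is None:
        cache = {}
    if node[0] != "var":                 # rebuild with deduped children
        node = (node[0],) + tuple(cse(c, cache) for c in node[1:])
    return cache.setdefault(node, node)  # reuse the first equal node seen

a = ("var", "a")
b = ("var", "b")
expr = ("mul", ("add", a, b), ("add", a, b))  # the same add appears twice
deduped = cse(expr)
shared = deduped[1] is deduped[2]  # True: both operands are one shared node
```

After the pass, the duplicated add is computed once and its result is reused by both consumers.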
---
[Visit Topic](https://discuss.tvm.apache.org/t/rfc-cse-optimization/8130/7) to
respond.