OK, so I commented out the `tune_graph()` call and used `sch_log`, which is the
schedule log from `tune_kernels()`:
```
with autotvm.apply_graph_best(sch_log):
    logging.info("Compiling the schedule")
    with relay.build_config(opt_level=3):
        graph, lib, params = relay.build_module.build(mod, target=target, params=params)
```
Thanks! I'll give that a try.
---
[Visit Topic](https://discuss.tvm.ai/t/autotvm-task-extract-from-program-in-tflite/6578/19) to respond.
You are receiving this because you enabled mailing list mode.
To unsubscribe from these emails, [click here](https://discuss.tvm.ai/email/unsubscribe).
Thanks for sharing. The failure is while calling `tune_graph`. The graph tuning
assumes the data to be float32.
Additionally, last time I tried, graph tuning can't work with QNN ops. One
way to handle this is to call QnnCanonicalize
(python/tvm/relay/qnn/transform.py) before calling graph tuning.
Here is the script that reproduces the issue:
```
import sys
import os
import tvm
from tvm import relay
from tvm import autotvm
from tvm.autotvm.tuner import XGBTuner, GATuner, RandomTuner, GridSearchTuner
from tvm.autotvm.graph_tuner import DPTuner, PBQPTuner
import tflite.Model
#
# This fun
```
Right now, I think the process fails in `relay.build_module.build(mod,
target=target, params=params)`. That is after the code I showed above. I just
verified that the layout transformation takes place by comparing both
`relay_NHWC.txt` and `relay_NCHW.txt`.
Let me create a minimal script to reproduce the issue.
Hmm, this is weird. My script seems to work well. Is it possible for you to
share the script? If not, does your run reach the printing of `relay_NHWC.txt`
for the quantized model, or does it fail before that?
---
I have double checked the type and made sure the NHWC -> NCHW is applied:
```
assert input_type == "uint8", "Quantized models use uint8 input_type"
mod, params = \
    relay.frontend.from_tflite(tflite_model,
                               shape_dict={input_name: dshape},
                               dtype_dict={input_name: input_type})
```
[quote="alopez_13, post:7, topic:6578"]
This is part of the Relay code:
```
%0 = layout_transform(%input, src_layout="NHWC", dst_layout="NCHW");
%1 = layout_transform(%v_param_1, src_layout="HWIO", dst_layout="OIHW");
%2 = qnn.conv2d(%0, %1, 128, 122, 0.0078125f, 0.0339689f, strides=[2, 2]
```
[/quote]
Just to confirm, can you please double-check your script?
We specify the input shape and dtype for the model while parsing (`from_tflite`).
So, even though most of the AutoTVM script can be the same, there needs to be a
small change while passing the input shape and dtype for FP32 and quantized
models.
IIUC, simple compilation (no auto-tuning) of both FP32 and quantized models
works.
But the auto-tuning + compilation fails for the quantized model (while the same
script works for FP32), right?
---
@anijain2305 Thanks for the prompt reply. Yes, I am setting `dtype_input =
"uint8"`. Also, I just verified that optimization of a non-quantized TFLite model
does work. In summary, the same optimization script will work for an FP32
version but not for a quantized version. Both models come from
https://www.tensorflow.org/lite/guide/hosted_models.
Are you giving the right input dtype to the model? TFLite quantized models
need the `uint8` dtype.
---
I'm not familiar with the QNN module so I'm calling @anijain2305 for help.
I would suggest opening another topic with a proper title for a new problem
next time; otherwise it's easy for it to get ignored.
---
After trying multiple quantized models, the schedule is finally produced. For
testing purposes I am using the quantized MobileNetV2 models from
https://www.tensorflow.org/lite/guide/hosted_models. However, now I get at
least two kinds of errors when generating the binary:
```
an internal in
```
Ok found the error, the model I was ingesting was not the correct one. With the
correct model the above problem does not show up.
Sorry for that!
---
Thank you, now that you mention it, it does make sense.
---
So your model is already in NCHW layout? From the log it seems the model is
still in NHWC. You can see that the selected implementation (e.g.,
`conv2d_nhwc.x86`) and the warning (e.g., NHWC layout is not optimized for x86)
are both about the NHWC layout. You may need to check whether the layout
conversion actually took place.
Thanks! I forgot to add that I did apply the `ConvertLayout` pass beforehand to
transform into NCHW:
```
mod, params = \
    relay.frontend.from_tflite(tflite_model,
                               shape_dict={input_name: dshape},
                               dtype_dict={input_name: input_type})
```
Looks like your model is in NHWC layout, but TVM currently supports NCHW layout
better, and AFAIK TVM doesn't have a tunable template for NHWC layout on x86.
You may need to use the `ConvertLayout` pass to transform your model to NCHW
layout and then extract tasks.
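For illustration, a minimal sketch of that conversion. The tiny NHWC module here stands in for the TFLite import result, and the dict-based `ConvertLayout` API is assumed (older TVM versions took a plain layout string instead):

```python
import tvm
from tvm import relay

# Tiny NHWC conv2d module standing in for the imported TFLite model.
data = relay.var("data", shape=(1, 8, 8, 3))
weight = relay.var("weight", shape=(3, 3, 3, 8))
conv = relay.nn.conv2d(data, weight, kernel_size=(3, 3), channels=8,
                       data_layout="NHWC", kernel_layout="HWIO")
mod = tvm.IRModule.from_expr(relay.Function([data, weight], conv))

# Convert to NCHW before task extraction; "default" lets TVM pick the
# matching kernel layout (OIHW for conv2d).
desired_layouts = {"nn.conv2d": ["NCHW", "default"]}
seq = tvm.transform.Sequential([
    relay.transform.RemoveUnusedFunctions(),
    relay.transform.ConvertLayout(desired_layouts),
])
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)

print(mod)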
---
I was trying to optimize a TFLite graph using
`autotvm.task.extract_from_program`, but the method returns an empty list. To be
precise:
```
tasks = autotvm.task.extract_from_program(mod["main"], target=target,
params=params, ops=target_op)
```
I have tri