This is just a follow-up. The MLPerf Tiny repository contains, under
benchmark/training/keyword_spotting, a Python script make_bin_files.py that can
be adapted to prepare the inputs for the models of the MLPerf Tiny KWS
benchmark. Since the preparation applies filters to the input WAV files, a
direct conversion from the .npy arrays back to the WAV files is not possible.
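For anyone who wants to reproduce this, a minimal sketch of the last step — dumping a preprocessed sample as a raw .bin file that a microTVM AOT project can consume — looks like this. The 49x10x1 int8 MFCC shape is what I believe the MLPerf Tiny DS-CNN KWS model expects, but verify it against your model's input tensor; the synthetic array just stands in for the real output of make_bin_files.py:

```python
import numpy as np

# Synthetic stand-in for one preprocessed KWS sample; the real arrays come
# out of make_bin_files.py. 49x10x1 int8 MFCC features is the shape used by
# the MLPerf Tiny DS-CNN model (an assumption - check your model's input).
rng = np.random.default_rng(0)
sample = rng.integers(-128, 128, size=(49, 10, 1), dtype=np.int8)

# The AOT project reads flat raw bytes, so dumping the array is enough:
sample.tofile("sample_0.bin")

# Round-trip check: the bytes on disk match the original array.
restored = np.fromfile("sample_0.bin", dtype=np.int8).reshape(49, 10, 1)
assert np.array_equal(sample, restored)
```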

I managed to get successful inference runs with microTVM on the quantized
(int8) version of the KWS model. With the float32 version, however, the
generated AOT project does not classify the samples correctly.
What could be the reason for that?
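One pitfall I can think of is feeding the int8-prepared features into the float32 graph without dequantizing them first. A sketch of what I mean, with hypothetical scale/zero_point values (the real ones come from the TFLite model's input tensor, e.g. via interpreter.get_input_details()):

```python
import numpy as np

# Hypothetical quantization parameters - read the actual scale and
# zero_point from the TFLite model's input tensor details.
scale, zero_point = 0.1, -83

q = np.array([-128, -83, 0, 127], dtype=np.int8)  # int8-prepared features

# Dequantize before feeding a float32 model:
f = (q.astype(np.float32) - zero_point) * scale

# Feeding the raw int8 values reinterpreted as float32 instead would shift
# and rescale every input, which typically breaks classification.
assert f.dtype == np.float32
```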

I tried the same with the Visual Wake Words (VWW) benchmark from MLPerf Tiny.
There, both the int8 and the float32 versions work well with microTVM.
Converting the inputs back to the original photos is also possible, since no
filters are applied.
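Sketched below is how that back-conversion can work for VWW. I am assuming the model's 96x96x3 RGB input and a simple [0, 1] scaling here — the actual normalization depends on the training scripts, so check those first:

```python
import numpy as np

# Synthetic stand-in for one VWW input; 96x96x3 RGB is the input size of
# the MLPerf Tiny VWW model, and the [0, 1] scaling is an assumption -
# check the actual normalization used during preprocessing.
rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=(96, 96, 3), dtype=np.uint8)
arr = pixels.astype(np.float32) / 255.0  # what the model would see

# Undo the scaling to get a viewable 8-bit image back:
img = np.clip(np.rint(arr * 255.0), 0, 255).astype(np.uint8)
assert np.array_equal(img, pixels)
# img can now be saved, e.g. with Pillow: Image.fromarray(img).save("x.png")
```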

Best regards,
Benedikt





---
[Visit Topic](https://discuss.tvm.apache.org/t/microtvm-mlperf-tiny-input-data/18006/2) to respond.
