Thanks @areusch. I finally got this working.
However, right now I am passing the parameters to the runtime using
`set_input(**params)`. I tried using `--link-params` while building, but then I
get an RPC session timeout error at `mod.run()`.
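For context, here is a minimal sketch of the two ways of supplying parameters being contrasted above, assuming a Relay module `mod` and a `params` dict from a frontend importer. The helper names and target strings are my own, and the exact `--link-params` spelling differs between TVM releases:

```python
def build_with_runtime_params(mod, params, target="c"):
    """Build normally; the caller later passes the weights with
    graph_mod.set_input(**params) before calling graph_mod.run()."""
    import tvm
    from tvm import relay

    with tvm.transform.PassContext(opt_level=3):
        return relay.build(mod, target=target, params=params)


def build_with_linked_params(mod, params, target="c --link-params"):
    """Bake the weights into the generated code at build time so no
    set_input() call is needed.  The flag spelling here is an assumption
    and varies across TVM versions."""
    import tvm
    from tvm import relay

    with tvm.transform.PassContext(opt_level=3):
        return relay.build(mod, target=tvm.target.Target(target), params=params)
```

With the first variant the weights travel over the RPC session when `set_input` is called; with the second they live in the flashed binary itself, which also grows the image size and can matter on a memory-constrained microTVM device.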
Hey @areusch, it seems I am unable to set the lowered model parameters into
`graph_mod`, which is why I am getting this error. I am able to run the
sine_model tutorial (https://tvm.apache.org/docs/tutorials/micro/micro_tflite.html)
using similar steps, but it fails when I pass the lowered parameters from my own
model.
@areusch
@areusch
I tried to reduce the model size; however, I am getting this error:
> SessionTerminatedError                  Traceback (most recent call last)
> <ipython-input> in <module>
>       6
>       7 # Set the model parameters using the lowered parameters produced by `relay.build`.
> ----> 8 graph_mod
Thanks @areusch. Unfortunately, increasing the memory size did not work. The
other thing I tried was replacing my
/tvm/tests/micro/qemu/zephyr-runtime/src/main.c with the "main.c" from the
blog-post eval, which did not work either. Is there working microTVM code,
apart from the sine model, on the same
@areusch I am trying to run a tflite model different from the one in the
tutorial given by @tgall_foo, and I am getting an error while running it:
> RPCError                               Traceback (most recent call last)
> <ipython-input> in <module>
>       2 with tvm.micro.Session(binary=micro_binary, flasher=flashe
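For reference, a hedged sketch of the session flow the snippet above appears to be attempting, with names mirrored from the traceback. `create_local_graph_executor`, `get_system_lib`, and `session.device` reflect roughly the TVM 0.8-era microTVM API and may be spelled differently in other versions:

```python
def open_micro_session_and_run(micro_binary, flasher, graph_json):
    """Flash the device, open a microTVM RPC session, and run the graph
    once on-device.  All argument names here are placeholders."""
    import tvm
    import tvm.micro

    with tvm.micro.Session(binary=micro_binary, flasher=flasher) as session:
        graph_mod = tvm.micro.create_local_graph_executor(
            graph_json, session.get_system_lib(), session.device
        )
        graph_mod.run()
        return graph_mod.get_output(0)
```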
Thanks @areusch. This works. :slight_smile:
---
[Visit
Topic](https://discuss.tvm.apache.org/t/measuring-utvm-inference-time/9064/4)
to respond.
Thanks @areusch. I tried your solution but I am getting this error:
>
> TypeError                              Traceback (most recent call last)
> <ipython-input> in <module>
>       1 from tvm.micro import session
> ----> 2 session.create_local_debug_runtime(graph, graph_mod, ses.context)
>
> ~/tvm_micro_with_debugger/t
I would like to know whether there is a way to deploy a TVM module to a cluster
of hardware devices for a performance gain, for example multiple CPUs or GPUs.
It can be a homogeneous cluster.
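TVM itself does not schedule work across machines, but one common pattern is to register each device with a TVM RPC tracker and shard the input batches across them from the host. A sketch under those assumptions, where the tracker address, the device `key`, the input name `"data"`, and the file paths are all hypothetical:

```python
def shard_batches(batches, num_workers):
    """Round-robin split of a list of input batches across workers."""
    shards = [[] for _ in range(num_workers)]
    for i, batch in enumerate(batches):
        shards[i % num_workers].append(batch)
    return shards


def run_shard_on_remote(tracker_host, tracker_port, key, lib_path, graph_json, shard):
    """Request one device from a TVM RPC tracker and run its shard there."""
    import tvm
    from tvm import rpc
    from tvm.contrib import graph_executor

    tracker = rpc.connect_tracker(tracker_host, tracker_port)
    remote = tracker.request(key)      # blocks until a device with this key is free
    remote.upload(lib_path)            # ship the compiled library to the device
    rlib = remote.load_module(lib_path.rsplit("/", 1)[-1])
    gmod = graph_executor.create(graph_json, rlib, remote.cpu(0))

    outputs = []
    for batch in shard:
        gmod.set_input("data", batch)  # "data" is an assumed input name
        gmod.run()
        outputs.append(gmod.get_output(0).numpy())
    return outputs
```

One would run `run_shard_on_remote` in one thread or process per device, for example via `concurrent.futures`, and concatenate the per-shard outputs in order on the host.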
---
[Visit Topic](https://discuss.tvm.apache.org/t/deploy-model-on-cluster/8516/1)
to respond.
I believe ONNX is an exchange format for porting DL models from one framework
to another. TVM is more of a DL compiler: it compiles DL models from different
frameworks for different hardware targets. During compilation it also tries to
optimize the model using scheduling and various other techniques.
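As a concrete illustration of that difference, here is a hedged sketch of compiling an ONNX model with TVM's Relay frontend; the model path, input names, and shapes are placeholders:

```python
def compile_onnx_with_tvm(onnx_path, input_shapes, target="llvm"):
    """Import an ONNX model into Relay and compile it for one target.

    input_shapes maps input names to shapes,
    e.g. {"input": (1, 3, 224, 224)}.
    """
    import onnx
    import tvm
    from tvm import relay

    model = onnx.load(onnx_path)
    mod, params = relay.frontend.from_onnx(model, shape=input_shapes)
    # opt_level=3 enables graph-level optimizations such as operator fusion.
    with tvm.transform.PassContext(opt_level=3):
        return relay.build(mod, target=target, params=params)
```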