If we somehow make the Llama model static instead of dynamic, is it possible to
run it with the graph executor after converting it to a Relay module?
I have been through the Relax documentation, and it is possible there, but I
need to run it with Relay instead.
Any suggestion or guideline would be greatly appreciated.
Thank You!





---
[Visit Topic](https://discuss.tvm.apache.org/t/is-it-possible-to-run-tiny-llama-using-relay/18070/1) to respond.
