I've checked the commit you mentioned above. As I understand it, the 
commit focuses on giving an example that demonstrates how to save a customized 
op and load it at runtime. Although the use case is slightly different from 
mine, I believe the build pipeline still works for my case. In conclusion, 
manually modifying `Cargo.toml` and building `libtvm_runtime.a` outside the 
official pipeline doesn't make sense.

In any case, based on my recent work, I believe the TVM documentation still 
lacks a simple example, similar to 
[from_torch](https://tvm.apache.org/docs/tutorials/frontend/from_pytorch.html), 
that illustrates the steps to deploy a model in the browser. I'll keep trying 
to achieve my goal; please let me know if you know of any example code or 
solution. Thank you very much.

---
[Visit 
Topic](https://discuss.tvm.ai/t/how-to-build-runnable-wasm-on-browser/7048/13) 
to respond.
