I would like to deploy my network model in the browser and run inference client-side with WebAssembly. There are several examples related to this goal, but none of them has helped me successfully build the app.

First, I noticed a web app [example](https://github.com/tqchen/tvm-webgpu-example) that uses WebGPU. Unfortunately, it fails on a PC without a GPU, so I tried to modify it into a CPU version. Below is a comparison between the original version and mine. However, the JS code still does not work: it keeps reporting a function-signature mismatch during the `inst.systemLib()` call.

      // Original JS code in index.html
      const inst = new tvmjs.Instance(new WebAssembly.Module(wasmSource), new EmccWASI());
      const gpuDevice = await tvmjs.detectGPUDevice();
      if (gpuDevice === undefined) {
        logger(
          "Cannot find WebGPU device, make sure you use a browser that supports WebGPU"
        );
        return;
      }
      inst.initWebGPU(gpuDevice);
      inst.systemLib();
      const graphJson = await (await fetch("./" + network + ".json")).text();
      const synset = await (await fetch("./imagenet1k_synset.json")).json();
      const paramsBinary = new Uint8Array(
        await (await fetch("./" + network + ".params")).arrayBuffer()
      );
      logger("Start to initialize the classifier with WebGPU...");

      // Modified CPU version
      const inst = new tvmjs.Instance(new WebAssembly.Module(wasmSource), new EmccWASI());
      const graphJson = await (await fetch("./" + network + ".json")).text();
      const synset = await (await fetch("./imagenet1k_synset.json")).json();
      const paramsBinary = new Uint8Array(
        await (await fetch("./" + network + ".params")).arrayBuffer()
      );
      const ctx = inst.cpu(0);
      const syslib = inst.systemLib(); // <-- the signature-mismatch error is thrown here
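
For reference, this is roughly the CPU flow I am trying to reach once `systemLib()` succeeds. It is only an untested sketch: I am assuming the graph runtime is linked into the wasm system lib, that the `tvm.graph_runtime.create` global function (the name registered by the C++ graph runtime) is reachable through `inst.getGlobalFunc`, and the input name and shape below are placeholders for my actual network.

      // Hedged sketch, not working code: the global-function name, input name,
      // and shape below are my assumptions, not verified against tvmjs.
      const createRuntime = inst.getGlobalFunc("tvm.graph_runtime.create");
      const runtime = createRuntime(graphJson, syslib, ctx.deviceType, ctx.deviceId);
      // Assumes tvmjs marshals a Uint8Array as TVM bytes for load_params.
      runtime.getFunction("load_params")(paramsBinary);
      // Placeholder input tensor; "data" and the shape depend on the model.
      const input = inst.empty([1, 3, 224, 224], "float32", ctx);
      runtime.getFunction("set_input")("data", input);
      runtime.getFunction("run")();
      const output = runtime.getFunction("get_output")(0);

If someone can confirm whether this is the intended CPU path in tvmjs, that alone would help.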

Because of the strong connection between Rust and wasm, I also tried generating the wasm from Rust. Following the recent PR with [test code](https://github.com/apache/incubator-tvm/tree/c7a16d892da52f931a259a406922238a5e3e4f96/rust/runtime/tests/test_wasm32), I successfully compiled my model with the `llvm -target=wasm32-unknown-unknown --system-lib` target. Yet the generated wasm binary does not work; even `test_wasm32` itself fails in the wasmtime WebAssembly runtime.

I wonder whether I am heading in the right direction, and whether there is any documentation about deploying a model in the browser.




