I have deployed some models on an RK3288, where inference completes in about 5 ms. However, I found that the TVM threads seem to consume CPU continuously even after inference has completed. I tested with the following code:

    tvm::runtime::Module mod =
        (*tvm::runtime::Registry::Get("tvm.graph_runtime.create"))(
            json_data, mod_syslib, device_type, device_id);
    tvm::runtime::PackedFunc set_input = mod.GetFunction("set_input");
    set_input("data", data);
    tvm::runtime::PackedFunc load_params = mod.GetFunction("load_params");
    load_params(params);
    tvm::runtime::PackedFunc run = mod.GetFunction("run");
    while (true)
    {
        run();
        usleep(100000);  // 100,000 us = 100 ms
    }

I use 4 TVM threads. Each `run()` takes about 5 ms, followed by a 100 ms sleep. But `top` shows the CPU occupation is about 75%, and `top -H` shows 3 threads occupying CPU cores continuously (the RK3288 has 4 ARM cores in total).

![image|591x82](upload://wcA1fHEQP92bxa8HrmYXmzitXAp.png) 

My question is: how can I make the TVM threads sleep promptly, without releasing the `tvm::runtime::Module mod`?

---
[Visit Topic](https://discuss.tvm.ai/t/do-tvm-runtime-threads-occupy-cpu-persistently-how-to-sleep-them-in-time/6178/1) to respond.
