[quote="namduc, post:6, topic:10889"]
after that i got the same result with a model onnx running on onnxruntime
[/quote]
It might be that onnxruntime was already using the hardware resources in the 
most efficient way, so while further improvement of the inference time is 
possible, it might be hard, and TVM simply reached the same near-optimal 
result. It's hard to say without looking into the model. Is it a publicly 
available model? Does it have more conv layers or matmul/dense?
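
If it helps, here is a quick way to check the operator mix of an ONNX model using the `onnx` Python package (the file name `model.onnx` is just a placeholder for your model):

```python
from collections import Counter

import onnx

# Load the model and count how often each operator type appears.
model = onnx.load("model.onnx")
op_counts = Counter(node.op_type for node in model.graph.node)

# Conv-heavy and matmul/dense-heavy models tend to tune differently.
for op in ("Conv", "MatMul", "Gemm"):
    print(op, op_counts.get(op, 0))
```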

[quote="namduc, post:6, topic:10889"]
and the results didn’t improve much
[/quote]
Is it the same model as in the beginning? If it is the same, there is progress 
compared to the earlier TVM results: from 3.4 s down to 0.5 s. As for the 
quoted tsv file, I only see the part from trial 10000 to 20000. Tuning could 
probably have been stopped around that 10000th trial or even earlier. At the 
same time, I am pretty sure that if we looked at the first lines, we would see 
the results improving significantly during tuning.
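
As a minimal sketch of how to check that, assuming the log is in AutoTVM's default JSON record format (`tuning.log` is a placeholder; a custom tsv export would need its own parsing), you can track how the best latency evolves over trials:

```python
import numpy as np
from tvm.autotvm.record import load_from_file

# Walk through the tuning records in order and report each new best latency.
best = float("inf")
for i, (inp, res) in enumerate(load_from_file("tuning.log"), start=1):
    if res.error_no != 0:  # skip failed measurements
        continue
    cost = np.mean(res.costs)
    if cost < best:
        best = cost
        print(f"trial {i}: new best {best * 1e3:.3f} ms")
```

If most of the "new best" lines land in the early trials, that confirms the later 10000 trials were not contributing much.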