[TVM Discuss] [Questions] Can TVM now support batched inference? Autotvm runs twice as long as tensorflow

2020-04-20 Thread adobay via TVM Discuss
Thanks, @kevinthesun, @comaniac --- [Visit Topic](https://discuss.tvm.ai/t/can-tvm-now-support-batched-inference-autotvm-runs-twice-as-long-as-tensorflow/6405/9) to respond. You are receiving this because you enabled mailing list mode.

[TVM Discuss] [Questions] Can TVM now support batched inference? Autotvm runs twice as long as tensorflow

2020-04-19 Thread adobay via TVM Discuss
Thanks @comaniac. With batch size 500 and `llvm -mcpu=haswell -libs=cblas`, TVM now gives a 2-3X performance improvement over TensorFlow. But the graph tuner still throws an exception: https://github.com/apache/incubator-tvm/issues/5369

[TVM Discuss] [Questions] Can TVM now support batched inference? Autotvm runs twice as long as tensorflow

2020-04-17 Thread adobay via TVM Discuss
Thank you very much. Tonight I will try what you said. The graph tuner threw an exception, so I only tuned each op individually.

[TVM Discuss] [Questions] Can TVM now support batched inference? Autotvm runs twice as long as tensorflow

2020-04-17 Thread adobay via TVM Discuss
My model does not contain conv2d; the most time-consuming op is nn.dense. Do you mean building the Relay module for batch size 500 with the optimized tuning history, and then running inference?

[TVM Discuss] [Questions] Can TVM now support batched inference?

2020-04-16 Thread adobay via TVM Discuss
I have a TensorFlow model whose CPU inference performance is poor at a batch size of 500 in production. After optimizing with TVM, performance at batch size 500 is much worse than TensorFlow's. Can TVM support batched inference?

[TVM Discuss] [Questions] WARNING:root:Failed to download tophub package for llvm:

2020-04-14 Thread adobay via TVM Discuss
I faced this warning too.