Thanks, @kevinthesun, @comaniac
---
[Visit Topic](https://discuss.tvm.ai/t/can-tvm-now-support-batched-inference-autotvm-runs-twice-as-long-as-tensorflow/6405/9) to respond.
Thanks @comaniac. With batch size 500 and `llvm -mcpu=haswell -libs=cblas`, TVM now gets a 2~3x performance improvement compared with TensorFlow. But the graph tuner still throws an exception:
https://github.com/apache/incubator-tvm/issues/5369
---
Thank you very much. Tonight I will try what you said. The graph tuner threw an exception, so I only tuned each op...
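For the record, the per-op tuning I ran is roughly the standard AutoTVM loop below (a minimal sketch; the log file name, trial count, and measure options are just what I happened to use, and `mod`, `params`, and `target` come from importing the model):

```python
from tvm import autotvm
from tvm.autotvm.tuner import XGBTuner

# Extract the tunable tasks (nn.dense etc.) from the Relay module and tune each
# one individually, writing the best configs to a log file.
tasks = autotvm.task.extract_from_program(mod["main"], params=params, target=target)

for i, task in enumerate(tasks):
    print("Tuning task %d/%d: %s" % (i + 1, len(tasks), task.name))
    tuner = XGBTuner(task, loss_type="rank")
    tuner.tune(
        n_trial=min(1000, len(task.config_space)),
        measure_option=autotvm.measure_option(
            builder=autotvm.LocalBuilder(),
            runner=autotvm.LocalRunner(number=10, repeat=1, min_repeat_ms=100),
        ),
        callbacks=[autotvm.callback.log_to_file("dense_tuning.log")],
    )
```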
---
[Visit Topic](https://discuss.tvm.ai/t/can-tvm-now-support-batched-inference-autotvm-runs-twice-as-long-as-tensorflow/6405/5) to respond.
My model does not contain conv2d; the most time-consuming op is nn.dense. Do you mean using the optimized tuning history to build the Relay module with batch size 500 and then running inference?
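Something like the sketch below? (The input name, shapes, and log file name are placeholders for my model; on older TVM versions `relay.build` returns `(graph, lib, params)` and `graph_runtime.create` is used instead of the factory module.)

```python
import numpy as np
import tvm
from tvm import relay, autotvm
from tvm.contrib import graph_runtime

# Apply the per-op tuning log when compiling, fix the batch-500 input shape at
# build time, then run inference with the graph runtime.
target = "llvm -mcpu=haswell -libs=cblas"
with autotvm.apply_history_best("dense_tuning.log"):
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, params=params)

dev = tvm.cpu()
module = graph_runtime.GraphModule(lib["default"](dev))
module.set_input("input", np.random.rand(500, 1024).astype("float32"))
module.run()
out = module.get_output(0).asnumpy()
```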
---
I have a TensorFlow model whose CPU inference performance is poor when the online batch size is 500. After optimizing with TVM, inference at batch size 500 is much worse than TensorFlow. Can TVM support batched inference?
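For reference, this is roughly how I import the model with the batch-500 input shape (a sketch; the model path, placeholder name, and feature size are simplified):

```python
import tensorflow as tf
from tvm import relay

# Load the frozen TensorFlow graph and import it into Relay with the batch-500
# input shape fixed, so TVM compiles kernels for that shape.
with tf.io.gfile.GFile("model.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

shape_dict = {"input": (500, 1024)}
mod, params = relay.frontend.from_tensorflow(graph_def, shape=shape_dict)
```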
---
I faced this warning too
---
[Visit Topic](https://discuss.tvm.ai/t/warningfailed-to-download-tophub-package-for-llvm-urlopen-error-errno-111-connection-refused/5759/4) to respond.