Thanks @comaniac. With batch size 500 and `llvm -mcpu=haswell -libs=cblas`, TVM gets a 2~3X performance improvement over TensorFlow. But the graph tuner still throws an exception:
https://github.com/apache/incubator-tvm/issues/5369
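For context, a minimal sketch of how that target string is typically passed to `relay.build` (the `mod`/`params` names are hypothetical placeholders for a module imported via a Relay frontend; API shape per TVM's standard compile flow):

```python
def build_with_cblas(mod, params):
    """Compile a Relay module for Haswell, offloading dense/matmul ops to cblas.

    `mod` and `params` are assumed to come from a frontend import, e.g.
    tvm.relay.frontend.from_tensorflow (hypothetical input here).
    """
    import tvm
    from tvm import relay

    # Same target string as above: LLVM CPU backend, Haswell tuning,
    # with BLAS-backed ops dispatched to cblas.
    target = "llvm -mcpu=haswell -libs=cblas"
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, params=params)
    return lib


# The target string itself, shown standalone:
print("llvm -mcpu=haswell -libs=cblas")
```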




