You can try tuning with batch size 1 and running inference with batch size 500. The total time should be roughly (batch size) * (single-batch inference time). The current TVM NCHW/NHWC conv2d schedules do not tune over the batch dimension, but some work on that is ongoing.
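
For reference, a minimal sketch of that workflow (tune at batch 1, then apply the same log when compiling at batch 500) using the AutoTVM API of that era. `get_model(batch_size)`, the input name `"input"`, the NHWC input shape, and the log file name are placeholders, not part of the original reply:

```python
# Sketch: tune conv2d tasks with batch size 1, then reuse the tuning log to
# compile and benchmark the same network with batch size 500.
# get_model(batch_size) is a placeholder for your own model import, e.g. via
# relay.frontend.from_tensorflow with the batch dimension fixed in the shape dict.
import numpy as np
import tvm
from tvm import relay, autotvm
from tvm.autotvm.tuner import XGBTuner
from tvm.contrib import graph_runtime

target = "llvm"
log_file = "conv2d_tuning.log"

# 1) Extract and tune tasks at batch size 1.
mod, params = get_model(batch_size=1)  # placeholder loader
tasks = autotvm.task.extract_from_program(mod["main"], target=target, params=params)

measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(),
    runner=autotvm.LocalRunner(number=10, repeat=1),
)
for task in tasks:
    tuner = XGBTuner(task)
    tuner.tune(
        n_trial=min(1000, len(task.config_space)),
        measure_option=measure_option,
        callbacks=[autotvm.callback.log_to_file(log_file)],
    )

# 2) Compile at batch size 500 with the same log; the batch dimension is not
#    part of the tuned config, so the batch-1 schedules are reused.
mod, params = get_model(batch_size=500)
with autotvm.apply_history_best(log_file):
    with relay.build_config(opt_level=3):
        graph, lib, params = relay.build(mod, target=target, params=params)

ctx = tvm.cpu()
module = graph_runtime.create(graph, lib, ctx)
module.set_input("input", np.random.uniform(size=(500, 224, 224, 3)).astype("float32"))
module.set_input(**params)

# Expect the run time to scale roughly linearly with the batch size.
ftimer = module.module.time_evaluator("run", ctx, number=3, repeat=3)
print("batch-500 inference: %.2f ms" % (np.mean(ftimer().results) * 1000))
```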

---
[Visit Topic](https://discuss.tvm.ai/t/can-tvm-now-support-batched-inference-autotvm-runs-twice-as-long-as-tensorflow/6405/2) to respond.
