On 2/2/26 10:00 AM, Stefan Hajnoczi wrote:
On Mon, Feb 02, 2026 at 02:50:28AM -0500, Michael S. Tsirkin wrote:
On Sun, Feb 01, 2026 at 05:24:03PM -0800, Pierrick Bouvier wrote:
On 1/31/26 9:48 AM, Michael S. Tsirkin wrote:
On Fri, Jan 30, 2026 at 06:00:56PM -0800, Pierrick Bouvier wrote:
Signed-off-by: Pierrick Bouvier <[email protected]>

Performance impact?

The reason we have these is performance ...


I would be very happy to run any benchmark that you might judge critical.

Should we run a disk read/write sequence on a virtio disk, or a download
over a virtio network interface? Any other ideas?

Block, for sure; people who care about network performance go the vhost or
vhost-user path.

So I CC'd Stefan Hajnoczi. Stefan, do you feel this needs a test, and what
kind of test would you suggest as the most representative of I/O overhead?

This command-line lets you benchmark virtio-blk without actual I/O
slowing down the request processing:

   qemu-system-x86_64 \
       -M accel=kvm \
       -cpu host \
       -m 4G \
       --blockdev file,node-name=drive0,filename=boot.img,cache.direct=on,aio=native \
       --blockdev null-co,node-name=drive1,size=$((10 * 1024 * 1024 * 1024)) \
       --object iothread,id=iothread0 \
       --device virtio-blk-pci,drive=drive0,iothread=iothread0 \
       --device virtio-blk-pci,drive=drive1,iothread=iothread0

Here is a fio command-line for 4 KiB random reads:

   fio \
       --ioengine=libaio \
       --direct=1 \
       --runtime=30 \
       --ramp_time=10 \
       --rw=randread \
       --bs=4k \
       --iodepth=128 \
       --filename=/dev/vdb \
       --name=randread

This is just a single vCPU, but it should be enough to see if there is
any difference in I/O Operations Per Second (IOPS) or efficiency
(IOPS/CPU utilization).
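Once both numbers are in hand, the efficiency metric above (IOPS divided by
CPU utilization) is a one-line calculation. A minimal sketch, where the IOPS
value and the QEMU CPU percentage are hypothetical placeholders to be read
from the fio summary and from a tool such as pidstat or top:

```shell
# Hypothetical example values; substitute the IOPS reported by fio and the
# QEMU process CPU utilization (in percent) observed during the run.
iops=251000
cpu_util=92.5

# Efficiency = IOPS per percent of CPU consumed (rounded to an integer).
efficiency=$(awk -v i="$iops" -v c="$cpu_util" 'BEGIN { printf "%.0f", i / c }')
echo "IOPS/CPU%: $efficiency"
```

Comparing this ratio before and after the patch shows whether any IOPS
change comes from the change itself or merely from different CPU usage.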

Stefan

I'll reply on v3 to keep conversation there.

Thanks,
Pierrick
