Hi all,

I'm currently working on an NVMe/TCP driver for the QEMU block layer. I'm mostly done with basic functionality and am now working on performance. There is still a lot of room for optimization with a single NVMe I/O queue pair, but I want to work on introducing more queue pairs first.

Since the multi-threading capabilities of QEMU have expanded since the NVMe/PCI driver was implemented a few years ago, I would like to make as much use of them as I can. The ideal would be a dedicated NVMe I/O queue pair for every thread executing I/O, with those threads in turn pinned to host cores, inspired by SPDK's NVMe driver. Or, at least, have something like this be user-configurable, as it is with virtio-blk. Is that even possible? If so, could you point me to some documentation or example code on how to achieve this?
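
For context, the kind of user-configurable mapping I mean is virtio-blk's iothread-vq-mapping property. If I read the docs correctly, the configuration looks roughly like this (untested sketch; the iothread IDs, node name and disk.img are just placeholders, and the -device JSON syntax is how I understand list-valued properties have to be passed):
    -object iothread,id=iot0 \
    -object iothread,id=iot1 \
    -blockdev driver=file,filename=disk.img,node-name=disk0 \
    -device '{"driver":"virtio-blk-pci","drive":"disk0","iothread-vq-mapping":[{"iothread":"iot0"},{"iothread":"iot1"}]}'
Something analogous for nvme-tcp, where each IOThread drives its own TCP queue pair, is essentially what I'm hoping to end up with.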

If you want to try the block driver, the code is at
    https://github.com/phschck/qemu
and you can use it by adding one of
    -drive driver=nvme-tcp,ip=x.x.x.x,port=x,subsysnqn=nqn.xxx...
    -drive file=nvme-tcp://x.x.x.x:x/nqn.xxx...
to your invocation. It might not work on every setup yet (I've only tested it on x86_64 against SPDK's nvmf target), and booting from it will take a while if there is any latency beyond what you'd get on a loopback interface, because requests are currently issued blocking and sequentially.
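
For completeness, the SPDK nvmf target I tested against was set up roughly like this (from memory, so double-check against the SPDK docs; the NQN, serial, address, port and malloc bdev size are just placeholders):
    ./build/bin/nvmf_tgt &
    ./scripts/rpc.py nvmf_create_transport -t TCP
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
After that, the driver can be pointed at 127.0.0.1:4420 with that subsystem NQN using either of the -drive forms above.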

/phschck
