On Thu, Jan 06, 2011 at 04:41:50PM +0000, Stefan Hajnoczi wrote:
> Here are 4k sequential read results (cache=none) to check whether we
> see an ioeventfd performance regression with virtio-blk.
>
> The idea is to use a small blocksize with an I/O pattern (sequential
> reads) that is cheap and executes quickly. Therefore we're doing many
> iops and the cost of the virtqueue kick/notify is especially important.
> We're not trying to stress the disk, we're trying to make the
> difference in ioeventfd=on/off apparent.
>
> I did 2 runs for both ioeventfd=off and ioeventfd=on. The results are
> similar: 1% and 2% degradation in MB/s or iops. We'd have to do more
> runs to see if the degradation is statistically significant, but the
> percentage value is so low that I'm satisfied.
>
> Are you happy to merge virtio-ioeventfd v6 + your fixups?
BTW if you could do some migration stress-testing too, would be nice.
autotest has support for it now.

-- 
MST