> -----Original Message-----
> From: Markus Armbruster [mailto:arm...@redhat.com]
> Sent: 05 November 2018 15:58
> To: Paul Durrant <paul.durr...@citrix.com>
> Cc: 'Kevin Wolf' <kw...@redhat.com>; Tim Smith <tim.sm...@citrix.com>;
> Stefano Stabellini <sstabell...@kernel.org>; qemu-bl...@nongnu.org;
> qemu-de...@nongnu.org; Max Reitz <mre...@redhat.com>; Anthony Perard
> <anthony.per...@citrix.com>; xen-de...@lists.xenproject.org
> Subject: Re: [Qemu-devel] xen_disk qdevification
>
> Paul Durrant <paul.durr...@citrix.com> writes:
>
> >> -----Original Message-----
> >> From: Kevin Wolf [mailto:kw...@redhat.com]
> >> Sent: 02 November 2018 11:04
> >> To: Tim Smith <tim.sm...@citrix.com>
> >> Cc: xen-de...@lists.xenproject.org; qemu-devel@nongnu.org;
> >> qemu-bl...@nongnu.org; Anthony Perard <anthony.per...@citrix.com>;
> >> Paul Durrant <paul.durr...@citrix.com>; Stefano Stabellini
> >> <sstabell...@kernel.org>; Max Reitz <mre...@redhat.com>;
> >> arm...@redhat.com
> >> Subject: xen_disk qdevification (was: [PATCH 0/3] Performance
> >> improvements for xen_disk v2)
> >>
> >> On 02.11.2018 at 11:00, Tim Smith wrote:
> >> > A series of performance improvements for disks using the Xen PV ring.
> >> >
> >> > These have had fairly extensive testing.
> >> >
> >> > The batching and latency improvements together boost the throughput
> >> > of small reads and writes by two to six percent (measured using fio
> >> > in the guest).
> >> >
> >> > Avoiding repeated calls to posix_memalign() reduced the dirty heap
> >> > from 25MB to 5MB in the case of a single datapath process, while
> >> > also improving performance.
> >> >
> >> > v2 removes some checkpatch complaints and fixes the CCs.
> >>
> >> Completely unrelated, but since you're the first person touching
> >> xen_disk in a while, you're my victim:
> >>
> >> At KVM Forum we discussed sending a patch to deprecate xen_disk
> >> because, after all those years, it still hasn't been converted to
> >> qdev. Markus is currently fixing some other not-yet-qdevified block
> >> device, but after that xen_disk will be the only one left.
> >>
> >> A while ago, a downstream patch review found out that there are some
> >> QMP commands that would immediately crash if a xen_disk device were
> >> present, because of the lacking qdevification. This is not the code
> >> quality standard I envision for QEMU. It's time for non-qdev devices
> >> to go.
> >>
> >> So if you guys are still interested in the device, could someone
> >> please finally look into converting it?
> >>
> >
> > I have a patch series to do exactly this. It's somewhat involved, as I
> > need to convert the whole PV backend infrastructure. I will try to
> > rebase and clean up my series a.s.a.p.
>
> Awesome! Please coordinate with Anthony Perard to avoid duplicating
> work if you haven't done so already.
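As an aside, the posix_memalign() saving Tim describes sounds like plain
allocation caching: keep one aligned buffer per ring, grow it on demand,
and reuse it across requests instead of allocating and freeing for every
request. A rough sketch of my reading of it (the names below are mine,
not from the patches):

  #include <stdlib.h>

  /* One cached aligned allocation per ring, reused across requests. */
  typedef struct IoreqBufCache {
      void *buf;    /* cached aligned allocation, or NULL */
      size_t size;  /* its size in bytes */
  } IoreqBufCache;

  /* Return an aligned buffer of at least 'len' bytes, calling
   * posix_memalign() only when the cached one is absent or too small. */
  static void *ioreq_buf_get(IoreqBufCache *c, size_t len, size_t align)
  {
      if (c->buf && c->size >= len) {
          return c->buf;               /* fast path: reuse as-is */
      }
      free(c->buf);                    /* absent or too small: replace */
      if (posix_memalign(&c->buf, align, len) != 0) {
          c->buf = NULL;
          c->size = 0;
          return NULL;
      }
      c->size = len;
      return c->buf;
  }

Reusing one allocation like that stops the heap being re-dirtied on every
request, which would explain the 25MB-to-5MB drop. Anyway, on to my
actual question.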
I've come across a bit of a problem that I'm not sure how best to deal
with, and so am looking for some advice.

I now have a qdevified PV disk backend, but I can't bring it up because
it fails to acquire a write lock on the qcow2 it is pointing at. This is
because there is also an emulated IDE drive using the same qcow2. This
does not appear to be a problem for the non-qdev xen_disk, presumably
because it does not open the qcow2 until the emulated device is
unplugged, and I don't really want to introduce similar hackery into my
new backend (i.e. I want it to attach to its drive, and hence open the
qcow2, during realize).

So, I'm not sure what to do. It is not actually a problem that both a PV
backend and an emulated device are using the same qcow2, because they
will never operate simultaneously, so is there any way I can bypass the
qcow2 lock check when I create the drive for my PV backend? (BTW, I tried
re-using the drive created for the emulated device, but that doesn't work
because there is a check that a drive is not already attached to
something.)

Any ideas?

  Paul
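P.S. To make the question concrete, the kind of thing I'm imagining in
the new backend's realize path looks roughly like the below. This is
only a sketch based on my possibly shaky understanding of the block
permission system: blk_set_perm() and the BLK_PERM_* constants are the
real API as far as I know, but the helper and the exact masks are made
up.

  #include "qemu/osdep.h"
  #include "qapi/error.h"
  #include "block/block.h"
  #include "sysemu/block-backend.h"

  /* Hypothetical helper called from realize: request read/write on the
   * image, but advertise a fully permissive shared mask, so that another
   * user of the same qcow2 (here, the emulated IDE device) does not
   * conflict with this backend. */
  static bool xen_block_set_perms(BlockBackend *blk, Error **errp)
  {
      if (blk_set_perm(blk, BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE,
                       BLK_PERM_ALL, errp) < 0) {
          return false;   /* errp has been set by blk_set_perm() */
      }
      return true;
  }

  /* Presumably the emulated IDE device would also need to share
   * BLK_PERM_WRITE (i.e. the equivalent of its share-rw=on property)
   * for the two write locks not to clash. */

Is something along those lines acceptable, or is there a better-supported
way of saying "these two users will never actually run concurrently"?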