[ Moving internal discussion to upstream list - with right address now ]

Am 30.03.2010 17:44, schrieb Juan Quintela:
> Kevin Wolf <kw...@redhat.com> wrote:
>> Am 30.03.2010 16:21, schrieb Juan Quintela:
>
>> So you would have a function that "almost closes" the image and
>> another one that basically re-opens, but uses an old fd?
>
> we are doing now:
>
> fd = open(file, ro);
> do stuff
> close(fd);
> fd = open(file, rw);
> do more stuff;
> close(fd);
> fd = open(file, ro);
> continue
>
> (error handling removed)
>
> what I want is something like:
>
> fd = open(file, ro);
> do stuff
> // we want to do some rw stuff
> mark fd as not usable
> flush(fd) // for some flush op
> fd2 = open(file, rw);
> do write stuff;
> close(fd2);
> mark fd as usable again
>
> why? Think that there is one error: in the 1st case, you are supposing
> that you are going to be able to open it again ro. In my case it is
> still open as ro. The only problem is finding/defining an appropriate
> "flush()" operation. Although that flush operation could be used in
> more places (before migration/savevm/...).

Yes, I did understand the problem you see. But it happens to be a nice
summary for the upstream list now. :-)
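To make that second pattern concrete for the list, here is a minimal C
sketch (this is not actual block layer code; write_via_second_fd and the
ro_usable flag are made-up names, and the flush step is only a comment
because defining it is exactly the open question):

    /* Toy sketch of the "keep the ro fd, write through a temporary rw fd"
     * idea. If the rw open fails, the original read-only descriptor is
     * untouched, so we can always keep reading. */
    #include <fcntl.h>
    #include <stdbool.h>
    #include <sys/types.h>
    #include <unistd.h>

    static int write_via_second_fd(const char *file, int ro_fd,
                                   bool *ro_usable, const void *buf,
                                   size_t len, off_t off)
    {
        int rw_fd;
        ssize_t ret;

        (void)ro_fd;              /* ro_fd stays open the whole time */

        *ro_usable = false;       /* "mark fd as not usable" */
        /* flush(ro_fd): drain requests that are still in flight on ro_fd */

        rw_fd = open(file, O_RDWR);
        if (rw_fd < 0) {
            *ro_usable = true;    /* nothing lost, ro_fd is still valid */
            return -1;
        }

        ret = pwrite(rw_fd, buf, len, off);
        close(rw_fd);

        *ro_usable = true;        /* "mark fd as usable again" */
        return ret < 0 ? -1 : 0;
    }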
>> I haven't had a detailed look yet, but probably the part where it
>> opens its image storage is common for all block drivers. So maybe
>> the BlockDriver.bdrv_open interface could be changed in a way that
>> block drivers get their "parent" bs as a parameter from the generic
>> block layer. Same goes for bdrv_close.
>
> Something like that could do the trick.

I'll have a look. However, I think this needs to have the clean image
format/protocol separation first. Otherwise we would have some nasty
special cases (most importantly, raw).

>>> That is part of my problem. Having bdrv_open_file() to just do that
>>> could be a start, but I haven't looked at all its uses. I agree that
>>> if we know that we want to read a file, we shouldn't be using the
>>> bdrv_open2() machinery.
>>
>> We wouldn't need -snapshot and backing files, right. The rest is
>> common code, I think. We could probably separate that out, but that
>> makes another call in the chain. ;-)
>
> my problem is not the number of calls (and open calls should be cheap,
> never called on hot paths). What I want is some rules that can be
> understood. Current code requires that you grep for all uses of a
> function before you ever think of doing any change.

I'm not talking about performance here, but about clarity. I think the
solution to the problem that you need to grep every caller is a
completely different one, though: comments should specify what exactly a
function is meant to do, so that you can rely on that specification. No
code reorganization is going to provide you an interface specification.

>>>> This is why we had the problems where management opened files with
>>>> -f raw (logically, they _are_ raw images), but rather had to open
>>>> host_device or something. This definitely needs a cleanup.
>>>>
>>>> Other than that I see little potential to make the call chain
>>>> shorter. It's just the layers that need to be there to provide the
>>>> functionality.
>>>
>>> It is not about being shorter, it is about being clear.
>>> bdrv_open, bdrv_open2, bdrv_open_file: we can remove at least one.
>>
>> bdrv_open is a simple wrapper, so it's easy to remove it.
>
> that would be one start :)

I'll send a patch.

>>>> If you can think of any concrete points, please tell me. Maybe it's
>>>> just that I'm already used to the confusing way it is and you can
>>>> come up with something simpler.
>>>
>>> I haven't thought more about this, but basically:
>>> - a way to open a file, knowing that it is not going to try it as
>>>   raw, guess things, etc
>>
>> We still need to distinguish file/host_device/nbd...
>
> normally we know already. The guessing stuff has bitten us already in
> the past if I remember correctly.

That was guessing image formats, not protocols. IIRC, we still do guess
things with host_device/host_floppy/host_cdrom. I mean, we could require
explicit specification of the protocol, so you'd need to say
host_device:/dev/foo and file:bar.qcow2, but I don't see how it would
improve things - and compatibility completely forbids this anyway.
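Just to illustrate what that explicit form would look like, a toy sketch
(not anything from block.c; parse_protocol is a made-up helper) - the
parsing itself would be trivial, compatibility is the real problem:

    /* Toy sketch: split "file:bar.qcow2" into protocol and path. A string
     * without a "proto:" prefix would keep today's guessing behaviour. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    static bool parse_protocol(const char *spec, char *proto,
                               size_t proto_size, const char **path)
    {
        const char *colon = strchr(spec, ':');

        /* No colon (or a Windows-style "c:" drive letter): no protocol */
        if (!colon || colon - spec < 2) {
            return false;
        }

        snprintf(proto, proto_size, "%.*s", (int)(colon - spec), spec);
        *path = colon + 1;
        return true;
    }

    int main(void)
    {
        char proto[16];
        const char *path;

        if (parse_protocol("host_device:/dev/foo", proto, sizeof(proto),
                           &path)) {
            printf("protocol=%s path=%s\n", proto, path);
        }
        return 0;
    }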
>>> - read-only stuff: if we ever want to write to a file, open it read
>>>   write, otherwise open it read only. re-opening it looks wrong
>>
>> So backing files should always be opened read-write?
>
> If we plan to write to them, yes.
>
>> So you can't use read-only files for backing files now that the
>> fallback is disabled?
>
> Oh, we can, but then we will never write to them.
>
> my problem is:
>
> open ro;
> do stuff
> // wait, it would be a good idea to write there
> close ro
> open rw
> write something
> close rw
> open ro
>
> And that is basically what we are doing now. If we try to open the
> file rw, it is because we think that we are going to write to it ->
> fail if the file can't be opened rw. Open read-only if we got asked
> for it. Current code does something like:
>
> if the user asks me to open a file read-write, but I can only open it
> read-only, then what the user means is that he wants it opened
> read-only.

No, we don't do that. If the file was read-only before, it stays
read-only after bdrv_commit(). If it was rw, it stays rw - and if that
fails, it won't be opened at all. But bdrv_commit() doesn't even reopen
anything when the file was rw anyway.

Kevin