Sergey Bugaev, le sam. 01 mai 2021 19:15:50 +0300, a écrit:
> On Sat, May 1, 2021 at 6:38 PM Samuel Thibault <samuel.thiba...@gnu.org>
> wrote:
> > Actually I'd say the pager should replace the cache. The pager is
> > already a cache by itself, we should not need to keep both the pager and
> > the cache, particularly since it means having to keep both coherent.
>
> Well, yes, I've considered that; but I've tried to keep the changes
> less invasive than that.

I understand, but sometimes it's much better to be invasive than to try
to keep the existing code at the cost of contortions.

> This actually brings me to a question: why is tarfs using netfs over
> diskfs?

I don't know, Ludo?

> on the other hand, the tar format, with its 512-byte
> blocks, sounds very much like a filesystem image to me. isofs uses
> diskfs, why doesn't tarfs?

It's not exactly the same since you have compression in the way. But
yes, that looks similar enough.

> 1. S_io_write ()
> 2. lock the node, validate size/offset, grow it if needed
> 3. pager_memcpy (), fault
> 4. another thread (!) gets to pager_{read,write}_page ()

Ok, I suspected something like this, but didn't dive in to make sure
that was your scenario.

> This is why the mutex cannot be made recursive: it's being grabbed
> from the other thread. And this is also why I cannot just extract the
> logic: I'm not calling the logic inside pager.c directly, I'm calling
> pager_memcpy (), which faults and *that* causes
> pager_{read,write}_page () to be called the same way it'd be called
> for any other task faulting on the mapping. And
> pager_{read,write}_page () naturally has to lock, to validate the
> size, access the cache, and whatnot.
>
> Perhaps you can think of a solution? How does diskfs cope with this?

Diskfs' pager_read_page does *not* have to lock the node: it just reads
and returns the data. That's again a point where you can see that
keeping the cache as well gets in the way rather than helping.
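To make the contrast concrete, here is a rough sketch of the two shapes
the read callback can take (simplified, not actual tarfs or diskfs
code: the upi->node / upi->store fields and the page_to_block helper
are invented for the example, only the pager_read_page prototype and
store_read are the real interfaces):

  /* Cache-backed callback (sketch): it needs the node lock to consult
     the cache, so it deadlocks when the fault comes from S_io_write,
     which already holds that lock in another thread.  */
  error_t
  cache_backed_read_page (struct user_pager_info *upi, vm_offset_t page,
                          vm_address_t *buf, int *write_lock)
  {
    struct node *np = upi->node;      /* invented field */

    pthread_mutex_lock (&np->lock);   /* held by S_io_write => deadlock */
    /* ... validate the size, find the page in the cache, copy it ... */
    pthread_mutex_unlock (&np->lock);
    *write_lock = 0;
    return 0;
  }

  /* Diskfs-style callback (sketch): no node lock at all, just fetch
     the backing data and hand it to the kernel; the pager itself is
     the only cache.  */
  error_t
  diskfs_style_read_page (struct user_pager_info *upi, vm_offset_t page,
                          vm_address_t *buf, int *write_lock)
  {
    size_t len = vm_page_size;

    *write_lock = 0;
    /* e.g. read the already-decompressed tar blocks for this page.  */
    return store_read (upi->store, page_to_block (upi, page),
                       vm_page_size, (void **) buf, &len);
  }

With the second shape there is simply nothing for the faulting thread
to wait on, so S_io_write can keep the node locked across
pager_memcpy () without trouble.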
Samuel