[EMAIL PROTECTED], on Sun 16 Mar 2008 08:52:56 +0100, wrote:
> On Tue, Mar 11, 2008 at 11:19:32AM +0000, Samuel Thibault wrote:
> > [EMAIL PROTECTED], on Tue 11 Mar 2008 04:53:45 +0100, wrote:
> > > [I] suggested a more adaptive approach: Keep track of the existing
> > > threads, and if none of them makes progress in a certain amount of
> > > time (say 100 ms), allow creating some more threads. But that was
> > > never implemented. Also, it still might cause considerable delays in
> > > some situations; and I'm not even sure it would fix all problems. (I
> > > didn't fully understand the problem discussed in this thread, so I
> > > don't know whether it would be fixed by that?)
> >
> > The problem I was observing is when you have a sync_all which triggers
> > the write of a lot of files, but unfortunately the superblock has been
> > paged out, so that you aren't able to start another thread to reload
> > it. Whatever the thresholds you choose, with a big enough load you
> > will still have the problem of not being able to create enough threads
> > for all these requests, plus one for the superblock reload request.
>
> So the problem is that a lot of requests get queued before the first one
> gets very far, so that when the superblock read is finally requested, it
> ends up at the end of a long queue?

Yes.

> What makes me wonder is, how can it happen in the first place that so
> many requests are generated before the superblock is requested during
> handling of the first one?

ld-ing xulrunner, which needs a lot of memory (thus paging out the
superblock), and then suddenly needs to write a lot of data, which
seemingly is not processed immediately, but on the periodic sync_all.

> Or is it a scheduling issue perhaps, that the requesting thread
> continues running after creating a request, rather than handling it
> first?...

That may be it.

Samuel
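
(For illustration only: below is a rough C sketch of the adaptive approach
described in the quoted text above, namely have the worker threads record
whenever they make progress, and only allow spawning extra threads once
nothing has made progress for about 100 ms. All names are invented; this is
not actual Hurd/ext2fs code, just a sketch of the idea.)

#include <pthread.h>
#include <stdbool.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long progress;      /* bumped on each completed request */
static bool allow_more_threads;     /* read by whatever spawns workers */

/* Called by a worker thread each time it finishes handling a request.  */
void note_progress (void)
{
  pthread_mutex_lock (&lock);
  progress++;
  allow_more_threads = false;       /* things are moving again */
  pthread_mutex_unlock (&lock);
}

/* Watchdog: if no request completed during the last interval, assume the
   existing threads are all blocked (e.g. waiting for the paged-out
   superblock) and permit creating more threads so that the blocking
   request can get served.  */
void *watchdog (void *arg)
{
  unsigned long last_seen = 0;

  for (;;)
    {
      usleep (100 * 1000);          /* the "say 100 ms" from above */

      pthread_mutex_lock (&lock);
      if (progress == last_seen)
        allow_more_threads = true;  /* stuck: lift the thread limit */
      last_seen = progress;
      pthread_mutex_unlock (&lock);
    }
}

The point of the heuristic is that a fully blocked pool makes no progress
at all, so the stall itself becomes the signal to exceed the usual thread
limit, rather than guessing a fixed number of threads in advance.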