Hello :)
With the recent advancements in Subhurd(TM) technology, our lack of
accounting is becoming more and more apparent. The challenge with
accounting is of course to attribute resources spent to the client
that actually asked some server for some work.
Also, with our current thread model
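The attribution problem stated above — charging work to the client that asked for it, even when one server delegates part of the work to another server — might be sketched like so (illustrative Python only; `serve_rpc` and the charge table are invented for this sketch, not a real Hurd or Subhurd interface):

```python
# Hedged sketch of RPC resource accounting (invented names): work done
# by a server is charged to the client whose RPC requested it, not to
# the server itself, and the charge follows delegated work downstream.
from collections import defaultdict

charges = defaultdict(int)

def serve_rpc(client, cost, downstream=None):
    """Handle an RPC, attributing `cost` units to `client`.

    If serving it requires asking another server for work, the original
    client -- not the intermediate server -- pays for that too.
    """
    charges[client] += cost
    if downstream:
        serve_rpc(client, downstream)   # delegated work, same payer

serve_rpc("client-a", 3, downstream=2)  # one server calls another
serve_rpc("client-b", 1)
print(dict(charges))
```

The sketch only shows the attribution rule; how a real server would learn the identity of the ultimately responsible client is exactly the open question in the message above.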
On Wed, 2008-03-19 at 21:48 +0100, [EMAIL PROTECTED] wrote:
> > Now the basic idea behind using one kernel thread to handle several
> > user threads is that when a user thread *would* block, you don't let
> > it block, instead you just take it away and run some other user
> > thread. That works ve
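The scheme quoted here — one kernel thread running several user threads, and switching to another user thread whenever one of them would block — can be sketched in miniature (illustrative Python, not Hurd or cthreads code; the generator-based scheduler and all names are assumptions of this sketch):

```python
# Minimal sketch of user-level threading: user threads are generators,
# the single "kernel thread" is the scheduler loop below. Where a real
# thread would block, a generator yields, and the scheduler just runs
# some other runnable user thread instead.
from collections import deque

def user_thread(name, steps, trace):
    for i in range(steps):
        trace.append((name, i))  # do one unit of work
        yield                    # would block here: hand back control

def run_scheduler(threads):
    """Round-robin the runnable user threads until all have finished."""
    ready = deque(threads)
    while ready:
        t = ready.popleft()
        try:
            next(t)              # resume until it "would block" again
            ready.append(t)      # still runnable, requeue it
        except StopIteration:
            pass                 # this user thread is done

trace = []
run_scheduler([user_thread("A", 2, trace), user_thread("B", 2, trace)])
print(trace)  # work from A and B interleaves on one kernel thread
```

The point of the model shows up in `trace`: progress alternates between the user threads even though only one kernel-visible thread ever runs.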
At Wed, 19 Mar 2008 18:35:56 -0400,
Thomas Bushnell BSG wrote:
> On Wed, 2008-03-19 at 17:56 +0100, Neal H. Walfield wrote:
> > At Wed, 19 Mar 2008 09:58:57 -0400,
> > Thomas Bushnell BSG wrote:
> > > And throwing a big wrinkle into all that is that many architectures do
> > > not make it *possible
Hi,
On Wed, Mar 19, 2008 at 09:51:16AM -0400, Thomas Bushnell BSG wrote:
> The heap is demand paged virtual memory, for which the kernel must
> maintain memory maps, as Neal was saying.
Well yes; but that's true for any kind of memory allocation -- it's not
at all specific to the problem of havi
Hi,
On Wed, Mar 19, 2008 at 09:58:57AM -0400, Thomas Bushnell BSG wrote:
> On Tue, 2008-03-18 at 10:14 +0100, [EMAIL PROTECTED] wrote:
> > I must admit that I do not fully understand the relation between
> > filesystems and paging yet... Probably this is what I really meant
> > to say :-)
>
> He
On Wed, 2008-03-19 at 17:56 +0100, Neal H. Walfield wrote:
> At Wed, 19 Mar 2008 09:58:57 -0400,
> Thomas Bushnell BSG wrote:
> > And throwing a big wrinkle into all that is that many architectures do
> > not make it *possible* for users to handle page faults. The processor
> > dumps a load of cr
Hi,
On Wed, Mar 19, 2008 at 10:45:15AM +, Samuel Thibault wrote:
> [EMAIL PROTECTED], on Tue 18 Mar 2008 11:02:43 +0100, wrote:
> > I don't know how the syncing works, so I can't really tell what the
> > problem is. If there are blocking points before the superblock read,
> > we need to ch
At Wed, 19 Mar 2008 09:58:57 -0400,
Thomas Bushnell BSG wrote:
> And throwing a big wrinkle into all that is that many architectures do
> not make it *possible* for users to handle page faults. The processor
> dumps a load of crap on the stack, and the kernel must preserve it
> carefully and then
Michal Suchanek, on Wed 19 Mar 2008 16:55:48 +0100, wrote:
> On 19/03/2008, Samuel Thibault <[EMAIL PROTECTED]> wrote:
> > Yes, that's what I meant actually: the diskfs_sync_everything() function
> > is able to trigger a lot of thread creations.
> >
> > A way to have things work correctly woul
On 19/03/2008, Samuel Thibault <[EMAIL PROTECTED]> wrote:
>
> Yes, that's what I meant actually: the diskfs_sync_everything() function
> is able to trigger a lot of thread creations.
>
> A way to have things work correctly would be by marking threads with a
> "level", i.e. diskfs_sync_everythin
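The "level" proposal is cut off above, so the following is only a guess at its general shape (illustrative Python with invented names, not diskfs code): give each level a reserved thread slot, so that a storm of requests at one level can never consume the slot that work at another level — say, writes triggered on behalf of diskfs_sync_everything — needs in order to make progress:

```python
# Hedged sketch of a per-level thread budget (invented scheme): a bounded
# pool with one reserved slot per level, so exhaustion at one level does
# not deadlock the work other levels depend on.
class LeveledPool:
    def __init__(self, total, levels):
        self.free = total - levels          # shared slots
        self.reserved = {lvl: 1 for lvl in range(levels)}

    def acquire(self, level):
        """Return True if a thread may be created for `level`."""
        if self.free > 0:
            self.free -= 1
            return True
        if self.reserved.get(level, 0) > 0:
            self.reserved[level] -= 1
            return True
        return False          # caller must queue the request instead

pool = LeveledPool(total=4, levels=2)     # 2 shared + 1 reserved per level
grants = [pool.acquire(0) for _ in range(5)]
print(grants)             # a storm at level 0 exhausts shared + own reserve
print(pool.acquire(1))    # but level 1 still gets its reserved slot
```

Whether the actual proposal reserved slots this way is not recoverable from the truncated message; the sketch only shows why a per-level distinction can turn a deadlock into a bounded queue.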
On Tue, 2008-03-18 at 10:14 +0100, [EMAIL PROTECTED] wrote:
> Hi,
>
> On Mon, Mar 17, 2008 at 07:00:02PM -0400, Thomas Bushnell BSG wrote:
>
> > On Sun, 2008-03-16 at 08:25 +0100, [EMAIL PROTECTED] wrote:
>
> > > We could move the servers one by one -- starting with the disk
> > > filesystems,
On Tue, 2008-03-18 at 10:26 +0100, [EMAIL PROTECTED] wrote:
> > You need kernel memory for the memory maps, at least one for each user
> > thread.
>
> No I don't. That's precisely where it is *not* equivalent.
>
> In the model I described, the state structures for the blocked requests
> (I prefe
[EMAIL PROTECTED], on Tue 18 Mar 2008 11:02:43 +0100, wrote:
> On Mon, Mar 17, 2008 at 10:41:01AM +, Samuel Thibault wrote:
> > [EMAIL PROTECTED], on Sun 16 Mar 2008 08:52:56 +0100, wrote:
>
> > > What makes me wonder is, how can it happen in the first place that
> > > so many requests a
Hi,
On Mon, Mar 17, 2008 at 10:47:40AM +0100, Neal H. Walfield wrote:
> At Sun, 16 Mar 2008 08:21:19 +0100, <[EMAIL PROTECTED]> wrote:
> > On Wed, Mar 12, 2008 at 05:12:03PM +0100, Marcus Brinkmann wrote:
> > > As for the threading model, more than one kernel thread per real
> > > CPU doesn't see
Hi,
On Mon, Mar 17, 2008 at 07:00:02PM -0400, Thomas Bushnell BSG wrote:
> On Sun, 2008-03-16 at 08:25 +0100, [EMAIL PROTECTED] wrote:
> > We could move the servers one by one -- starting with the disk
> > filesystems, as this is where the issues are manifesting most...
>
> But this is still no
Hi,
On Mon, Mar 17, 2008 at 10:50:31AM +0100, Neal H. Walfield wrote:
> At Sun, 16 Mar 2008 08:01:22 +0100, <[EMAIL PROTECTED]> wrote:
> > On Tue, Mar 11, 2008 at 12:10:17PM +0100, Neal H. Walfield wrote:
> > > > using some kind of continuation mechanism: Have a limited number
> > > > of threads
te further requests. Why is that not the case?
This is really the main problem here. Even if we change the thread model
such that the request storm doesn't result in a deadlock (with thread
limiting) or resource exhaustion (without limiting), it will still
result in terrible performance. W
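The continuation mechanism discussed throughout the thread — a small, fixed set of threads that, instead of blocking, save the little state that matters and resume it when the awaited event fires — might look like this in outline (illustrative Python; `handle_request` and `read_completed` are invented names, not Hurd interfaces):

```python
# Hedged sketch of continuation-style request handling: a blocked RPC is
# represented by a small closure registered against the event it awaits,
# not by a parked kernel thread and its whole stack.
waiting = {}      # block number -> continuations awaiting that block
results = []

def handle_request(req_id, block_no):
    """First half of a request: runs until it needs block `block_no`."""
    def continuation(data):
        # Second half, resumed when the read completes. Only req_id and
        # the arriving data are kept alive -- not a thread stack.
        results.append((req_id, len(data)))
    waiting.setdefault(block_no, []).append(continuation)

def read_completed(block_no, data):
    """Event delivery: resume every continuation waiting on this block."""
    for cont in waiting.pop(block_no, []):
        cont(data)

handle_request(1, 7)
handle_request(2, 7)           # two requests wait, zero threads blocked
read_completed(7, b"\0" * 512)
print(results)                 # both requests finished on the one event
```

This illustrates the memory argument made in the thread: the cost of a pending request is one small closure rather than one thread with stack and kernel bookkeeping.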
On Sun, 2008-03-16 at 08:25 +0100, [EMAIL PROTECTED] wrote:
> Hi,
>
> On Wed, Mar 12, 2008 at 03:56:47PM -0400, Thomas Bushnell BSG wrote:
>
> > The clever way is to identify the particular things in the stack which
> > must be saved, and throw the rest away, and then restart the
> > continuatio
Thomas Bushnell BSG, on Mon 17 Mar 2008 15:09:12 -0400, wrote:
>
> On Sun, 2008-03-16 at 08:52 +0100, [EMAIL PROTECTED] wrote:
> > Hi,
> >
> > On Tue, Mar 11, 2008 at 11:19:32AM +, Samuel Thibault wrote:
> > > [EMAIL PROTECTED], on Tue 11 Mar 2008 04:53:45 +0100, wrote:
> >
> > > > [I]
On Sun, 2008-03-16 at 08:52 +0100, [EMAIL PROTECTED] wrote:
> Hi,
>
> On Tue, Mar 11, 2008 at 11:19:32AM +, Samuel Thibault wrote:
> > [EMAIL PROTECTED], on Tue 11 Mar 2008 04:53:45 +0100, wrote:
>
> > > [I] suggested a more adaptive approach: Keep track of the existing
> > > threads, and
[EMAIL PROTECTED], on Sun 16 Mar 2008 08:52:56 +0100, wrote:
> On Tue, Mar 11, 2008 at 11:19:32AM +, Samuel Thibault wrote:
> > [EMAIL PROTECTED], on Tue 11 Mar 2008 04:53:45 +0100, wrote:
>
> > > [I] suggested a more adaptive approach: Keep track of the existing
> > > threads, and if no
At Sun, 16 Mar 2008 08:01:22 +0100,
<[EMAIL PROTECTED]> wrote:
> On Tue, Mar 11, 2008 at 12:10:17PM +0100, Neal H. Walfield wrote:
>
> > > using some kind of continuation mechanism: Have a limited number of
> > > threads (ideally one per CPU) handle incoming requests. Whenever
> > > some operatio
At Sun, 16 Mar 2008 08:21:19 +0100,
<[EMAIL PROTECTED]> wrote:
> On Wed, Mar 12, 2008 at 05:12:03PM +0100, Marcus Brinkmann wrote:
>
> > As for the threading model, more than one kernel thread per real CPU
> > doesn't seem to make much sense in most cases.
>
> Well, add a "per processing step" to
Hi,
On Wed, Mar 12, 2008 at 03:56:47PM -0400, Thomas Bushnell BSG wrote:
> The clever way is to identify the particular things in the stack which
> must be saved, and throw the rest away, and then restart the
> continuation with the few things that really matter. This is what the
> kernel does i
Hi,
On Tue, Mar 11, 2008 at 11:19:32AM +, Samuel Thibault wrote:
> [EMAIL PROTECTED], on Tue 11 Mar 2008 04:53:45 +0100, wrote:
> > [I] suggested a more adaptive approach: Keep track of the existing
> > threads, and if none of them makes progress in a certain amount of
> > time (say 100 ms
Hi,
On Tue, Mar 11, 2008 at 12:10:17PM +0100, Neal H. Walfield wrote:
> > using some kind of continuation mechanism: Have a limited number of
> > threads (ideally one per CPU) handle incoming requests. Whenever
> > some operation would require blocking for some event (in the case of
> > diskfs,
Hi,
On Wed, Mar 12, 2008 at 05:12:03PM +0100, Marcus Brinkmann wrote:
> As for the threading model, more than one kernel thread per real CPU
> doesn't seem to make much sense in most cases.
Well, add a "per processing step" to make this statement more generally
true. In some cases, it's useful t
On Wed, 2008-03-12 at 20:46 +0100, Neal H. Walfield wrote:
> At Wed, 12 Mar 2008 15:32:26 -0400,
> Thomas Bushnell BSG wrote:
> > On Tue, 2008-03-11 at 12:10 +0100, Neal H. Walfield wrote:
> > > What you are suggesting is essentially using a user-level thread
> > > package. (Compacting a thread's
At Wed, 12 Mar 2008 15:32:26 -0400,
Thomas Bushnell BSG wrote:
> On Tue, 2008-03-11 at 12:10 +0100, Neal H. Walfield wrote:
> > What you are suggesting is essentially using a user-level thread
> > package. (Compacting a thread's state in the form of a closure is a
> > nice optimization, but the mo
On Tue, 2008-03-11 at 12:10 +0100, Neal H. Walfield wrote:
> What you are suggesting is essentially using a user-level thread
> package. (Compacting a thread's state in the form of a closure is a
> nice optimization, but the model essentially remains the same.) The
> main advantage to user-level
Hi,
At Tue, 11 Mar 2008 12:10:17 +0100,
Neal H. Walfield wrote:
> What you are suggesting is essentially using a user-level thread
> package. (Compacting a thread's state in the form of a closure is a
> nice optimization, but the model essentially remains the same.) The
> main advantage to user-
Hello,
[EMAIL PROTECTED], on Tue 11 Mar 2008 04:53:45 +0100, wrote:
> [I] suggested a more adaptive approach: Keep track of the existing
> threads, and if none of them makes progress in a certain amount of
> time (say 100 ms), allow creating some more threads. But that was
> never implemented.
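The adaptive approach quoted above — track the existing threads, and allow creating more only when none of them has made progress for roughly 100 ms — can be sketched like this (illustrative Python with logical timestamps standing in for the wall-clock check; `AdaptivePool` and its methods are invented for this sketch):

```python
# Hedged sketch of adaptive thread creation: growth is permitted only
# while the existing threads are stalled, which caps the pool when work
# is flowing and grows it only to break a stall.
STALL_LIMIT = 100    # "milliseconds" of logical time without progress

class AdaptivePool:
    def __init__(self):
        self.threads = 1
        self.last_progress = 0     # logical timestamp of last progress

    def note_progress(self, now):
        self.last_progress = now   # some existing thread got work done

    def may_create_thread(self, now):
        """Allow growth only after STALL_LIMIT with no progress."""
        if now - self.last_progress >= STALL_LIMIT:
            self.threads += 1
            self.last_progress = now   # assume the new thread helps
            return True
        return False

pool = AdaptivePool()
pool.note_progress(now=0)
denied = pool.may_create_thread(now=50)     # threads progressing: denied
allowed = pool.may_create_thread(now=150)   # stalled 150 ms: allowed
print(denied, allowed, pool.threads)
```

As the message notes, this was never implemented; the sketch only makes the policy concrete enough to see that it bounds thread creation by observed stalls rather than by a fixed count.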
Olaf,
> The real solution here of course is to fix the thread model
I fully agree that given Mach's architecture, one kernel thread per
extant RPC is the wrong approach.
> using some kind of continuation mechanism: Have a limited number of
> threads (ideally one per CPU) h
er of threads is inherently buggy in the
> Hurd, and that patch MUST be disabled for anything to work properly.
I'm glad this discussion came up at last: This is a very serious issue,
and your input is necessary.
The real problem here is that the current thread model of the Hurd
servers i
34 matches