RE: [patch 1/4] - Potential performance bottleneck for Linux TCP

2006-11-30 Thread Wenji Wu
> if you still have the test-setup, could you nevertheless try setting the priority of the receiving TCP task to nice -20 and see what kind of performance you get?

A process with a nice value of -20 can easily get interactivity status. When its timeslice expires, it still goes back to the active array. It just h

RE: [patch 1/4] - Potential performance bottleneck for Linux TCP

2006-11-30 Thread Wenji Wu
> It steals timeslices from other processes to complete the tcp_recvmsg() task, and only when it does so for too long will it be preempted. Processing the backlog queue on behalf of need_resched() will break fairness too - processing itself can take a lot of time, so the process can be scheduled away

RE: [patch 1/4] - Potential performance bottleneck for Linux TCP

2006-11-30 Thread Wenji Wu
> The solution is really simple and needs no kernel change at all: if you want the TCP receiver to get a larger share of timeslices then either renice it to -20 or renice the other tasks to +19.

Simply giving a larger share of timeslices to the TCP receiver won't solve the problem. No matter what

RE: [patch 1/4] - Potential performance bottleneck for Linux TCP

2006-11-30 Thread Wenji Wu
> We can make explicit preemption checks in the main loop of tcp_recvmsg(), and release the socket and run the backlog if need_resched() is TRUE. This is the simplest and most elegant solution to this problem.

I am not sure whether this approach will work. How can you make the explicit pree

Re: [patch 1/4] - Potential performance bottleneck for Linux TCP

2006-11-29 Thread Wenji Wu
is impossible with the attachments you've used.

> Here you go - joined up, cleaned up, ported to mainline and test-compiled. That yield() will need to be removed - yield()'s behaviour is truly awful if the system is otherwise busy.

What is it

Re: [patch 1/4] - Potential performance bottleneck for Linux TCP

2006-11-29 Thread Wenji Wu
Yes, when CONFIG_PREEMPT is disabled, the "problem" won't happen. That is why I put "for 2.6 desktop, low-latency desktop" in the uploaded paper. This "problem" happens with the 2.6 Desktop and Low-latency Desktop configurations.

> We could also pepper tcp_recvmsg() with some very carefully placed preemption di

[patch 3/4] - Potential performance bottleneck for Linux TCP

2006-11-29 Thread Wenji Wu
From: Wenji Wu <[EMAIL PROTECTED]> Greetings, For Linux TCP, when the network application makes a system call to move data from the socket's receive buffer to user space by calling tcp_recvmsg(), the socket will be locked. During that period, all incoming packets for the TCP socket wil

[patch 2/4] - Potential performance bottleneck for Linux TCP

2006-11-29 Thread Wenji Wu
From: Wenji Wu <[EMAIL PROTECTED]> Greetings, For Linux TCP, when the network application makes a system call to move data from the socket's receive buffer to user space by calling tcp_recvmsg(), the socket will be locked. During that period, all incoming packets for the TCP socket wil

[patch 1/4] - Potential performance bottleneck for Linux TCP

2006-11-29 Thread Wenji Wu
From: Wenji Wu <[EMAIL PROTECTED]> Greetings, For Linux TCP, when the network application makes a system call to move data from the socket's receive buffer to user space by calling tcp_recvmsg(), the socket will be locked. During that period, all incoming packets for the TCP socket wil

[Changelog] - Potential performance bottleneck for Linux TCP

2006-11-29 Thread Wenji Wu
From: Wenji Wu <[EMAIL PROTECTED]> Greetings, For Linux TCP, when the network application makes a system call to move data from the socket's receive buffer to user space by calling tcp_recvmsg(), the socket will be locked. During that period, all incoming packets for the TCP socket wil

[patch 4/4] - Potential performance bottleneck for Linux TCP

2006-11-29 Thread Wenji Wu
From: Wenji Wu <[EMAIL PROTECTED]> Greetings, For Linux TCP, when the network application makes a system call to move data from the socket's receive buffer to user space by calling tcp_recvmsg(), the socket will be locked. During that period, all incoming packets for the TCP socket wil