Re: [Beowulf] automount on high ports

2008-07-02 Thread Carsten Aulbert
Hi all, Greg Lindahl wrote: > On Wed, Jul 02, 2008 at 06:31:51PM -0400, Perry E. Metzger wrote: > >> It isn't quite that bad. You can use one of the SO_REUSE* calls in the >> code to make things less dire. Apparently the kernel doesn't do that >> for NFS client connection establishment, though. T

[Beowulf] Re: dealing with lots of sockets

2008-07-02 Thread Perry E. Metzger
"Robert G. Brown" <[EMAIL PROTECTED]> writes: > I'm not quite sure what you mean by "vast numbers of teeny high > latency requests" so I'm not sure if we really are disagreeing or > agreeing in different words. I mostly have worried about such schemes in the case of, say, 10,000 people connecting

Re: dealing with lots of sockets (was Re: [Beowulf] automount on high ports)

2008-07-02 Thread Greg Lindahl
On Wed, Jul 02, 2008 at 08:48:32PM -0400, Lawrence Stewart wrote: > Back in 1994, with 90 MHz pentiums, NCSA's httpd was the leading > webserver with a design that forked a new process for every request. Apache eventually moved to a model where forked processes handled several requests serially b

Re: dealing with lots of sockets (was Re: [Beowulf] automount on high ports)

2008-07-02 Thread Lawrence Stewart
Sure, but it is way inefficient. Every single process you fork means another data segment, another stack segment, which means lots of memory. Every process you fork also means that concurrency is achieved only by context switching, which means loads of expense on changing MMU state and more.
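To make the fork-per-request cost concrete, here is a minimal sketch of that pattern in C (illustrative only, not any particular server's code; the port number is an arbitrary example and error handling is omitted). Each accepted connection gets its own process, so memory footprint and context-switch overhead grow with the number of concurrent clients:

    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa;

        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_addr.s_addr = htonl(INADDR_ANY);
        sa.sin_port = htons(8080);              /* arbitrary example port */
        bind(lfd, (struct sockaddr *)&sa, sizeof(sa));
        listen(lfd, 128);

        for (;;) {
            int cfd = accept(lfd, NULL, NULL);
            if (cfd < 0)
                continue;
            if (fork() == 0) {                  /* child: serve exactly one client */
                close(lfd);
                /* ... read the request, send the reply ... */
                close(cfd);
                _exit(0);
            }
            close(cfd);                         /* parent goes straight back to accept() */
            while (waitpid(-1, NULL, WNOHANG) > 0)
                ;                               /* reap any finished children */
        }
    }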

Re: dealing with lots of sockets (was Re: [Beowulf] automount on high ports)

2008-07-02 Thread Robert G. Brown
On Wed, 2 Jul 2008, Perry E. Metzger wrote: "Robert G. Brown" <[EMAIL PROTECTED]> writes: Well, it actually kind of is. Typically, a box in an HPC cluster is running stuff that's compute bound and whose primary job isn't serving vast numbers of teeny high latency requests. That's much more wha

[Beowulf] Re: OT: LTO Ultrium (3) throughput? (Steve Cousins)

2008-07-02 Thread Steve Cousins
On Wed, 2 Jul 2008, David Mathog wrote: Steve Cousins <[EMAIL PROTECTED]> wrote: Do different LTO-3 drives have different maximum tape write speeds? I don't know. I've always heard 80 MB/sec. lto.org shows: http://www.lto.org/technology/ugen.php?section=0&subsec=ugen for lto-3 "up to 160
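For anyone reconciling the two numbers in this subthread: 80 MB/s is LTO-3's native (uncompressed) transfer rate, while the "up to 160 MB/s" figure quoted by lto.org assumes the nominal 2:1 hardware compression, i.e. 80 MB/s x 2 = 160 MB/s of pre-compression data.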

[Beowulf] Re: OT: LTO Ultrium (3) throughput?

2008-07-02 Thread Steve Cousins
On Wed, 2 Jul 2008, David Mathog wrote: Rats. I wonder what the difference is now? If you don't already have it, please grab a copy of Exabyte's ltoTool from here: http://www.exabyte.com/support/online/downloads/downloads.cfm?did=1344&prod_id=581 % /usr/local/src/ltotool/ltoTool -C 1 /dev/n

Re: [Beowulf] automount on high ports

2008-07-02 Thread Greg Lindahl
On Wed, Jul 02, 2008 at 06:31:51PM -0400, Perry E. Metzger wrote: > It isn't quite that bad. You can use one of the SO_REUSE* calls in the > code to make things less dire. Apparently the kernel doesn't do that > for NFS client connection establishment, though. There is probably > some code to fix

Re: [Beowulf] automount on high ports

2008-07-02 Thread Perry E. Metzger
Greg Lindahl <[EMAIL PROTECTED]> writes: > Go look at code that actually uses priv ports to connect out. Normally > the port is picked in the connect() call, and that means you can have > all the 4-tuples. But for priv ports, you have to loop trying specific > candidate ports under 1024 until you
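A minimal sketch of the loop Greg is describing, essentially what bindresvport(3) does before connect(); illustrative only, the helper name and exact port range are just for the example, and it has to run as root since only root may bind ports below 1024:

    #include <errno.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Try to bind 'fd' to some free reserved port before connect();
     * returns the port bound, or -1 if every candidate was busy. */
    int bind_reserved(int fd)
    {
        struct sockaddr_in sa;

        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_addr.s_addr = htonl(INADDR_ANY);

        for (int port = 1023; port >= 512; port--) {
            sa.sin_port = htons(port);
            if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0)
                return port;                    /* found a free one */
            if (errno != EADDRINUSE)
                return -1;                      /* some other failure */
        }
        return -1;                              /* all of 512-1023 in use */
    }

Because the port is chosen by bind() before connect(), each candidate port can be used only once locally, so the loop runs out after a few hundred ports even though many more distinct 4-tuples would still be available.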

[Beowulf] Re: OT: LTO Ultrium (3) throughput? (Steve Cousins)

2008-07-02 Thread David Mathog
Steve Cousins <[EMAIL PROTECTED]> wrote: > > Just under 60MB/sec seems to be the maximum tape transport read/write > > limit. Pretty reliably the first write from the beginning of tape was a > > bit slower than writes started further into the tape. > > I believe LTO-3 is rated at 80 MB/sec witho

[Beowulf] Re: OT: LTO Ultrium (3) throughput?

2008-07-02 Thread David Mathog
Steve Cousins <[EMAIL PROTECTED]> wrote > David Mathog wrote: > > Just under 60MB/sec seems to be the maximum tape transport read/write > > limit. Pretty reliably the first write from the beginning of tape was a > > bit slower than writes started further into the tape. > > I believe LTO-3 is rate

Re: [Beowulf] automount on high ports

2008-07-02 Thread Greg Lindahl
On Wed, Jul 02, 2008 at 08:28:48AM -0400, Perry E. Metzger wrote: > None > of this should cause you to run out of ports, period. If you don't > understand that, refer back to my original message. A TCP socket is a > unique 4-tuple. The host:port 2-tuples are NOT unique and not an > exhaustible res

Re: dealing with lots of sockets (was Re: [Beowulf] automount on high ports)

2008-07-02 Thread Bruno Coutinho
2008/7/2 Perry E. Metzger <[EMAIL PROTECTED]>: > > "Robert G. Brown" <[EMAIL PROTECTED]> writes: > >> Well, it actually kind of is. Typically, a box in an HPC cluster is > >> running stuff that's compute bound and who's primary job isn't serving > >> vast numbers of teeny high latency requests. Th

Re: [Beowulf] software for compatible with a cluster

2008-07-02 Thread Bernard Li
On Wed, Jul 2, 2008 at 4:32 AM, Perry E. Metzger <[EMAIL PROTECTED]> wrote: > "Jon Aquilina" <[EMAIL PROTECTED]> writes: >> if i use blender how nicely does it work in a cluster? > > I believe it works quite well. As far as I know blender does not have any built-in "clustering" capabilities. But

[Beowulf] Re: OT: LTO Ultrium (3) throughput?

2008-07-02 Thread Steve Cousins
Just under 60MB/sec seems to be the maximum tape transport read/write limit. Pretty reliably the first write from the beginning of tape was a bit slower than writes started further into the tape. I believe LTO-3 is rated at 80 MB/sec without compression. Testing it on our HP unit in an Over

dealing with lots of sockets (was Re: [Beowulf] automount on high ports)

2008-07-02 Thread Perry E. Metzger
"Robert G. Brown" <[EMAIL PROTECTED]> writes: >> Well, it actually kind of is. Typically, a box in an HPC cluster is >> running stuff that's compute bound and who's primary job isn't serving >> vast numbers of teeny high latency requests. That's much more what a >> web server does. However... > >

Re: [Beowulf] automount on high ports

2008-07-02 Thread Robert G. Brown
On Wed, 2 Jul 2008, Perry E. Metzger wrote: "Robert G. Brown" <[EMAIL PROTECTED]> writes: On Wed, 2 Jul 2008, Perry E. Metzger wrote: By the way, you can now design daemons to handle tens of thousands of simultaneous connections with clean event driven design on a modern multiprocessor with p
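For readers curious what "clean event driven design" looks like in practice, here is a bare-bones sketch of a single-threaded epoll loop in C (Linux-specific; kqueue is the BSD analogue). It is illustrative only: the function is hypothetical, and error handling and the actual request processing are omitted.

    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    void event_loop(int listen_fd)
    {
        int ep = epoll_create(1024);            /* the size argument is only a hint */
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
        epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);

        struct epoll_event ready[1024];
        for (;;) {
            int n = epoll_wait(ep, ready, 1024, -1);
            for (int i = 0; i < n; i++) {
                int fd = ready[i].data.fd;
                if (fd == listen_fd) {          /* new connection arriving */
                    int cfd = accept(listen_fd, NULL, NULL);
                    struct epoll_event cev = { .events = EPOLLIN, .data.fd = cfd };
                    epoll_ctl(ep, EPOLL_CTL_ADD, cfd, &cev);
                } else {                        /* existing client is readable */
                    char buf[4096];
                    ssize_t r = read(fd, buf, sizeof(buf));
                    if (r <= 0) {               /* EOF or error: drop the client */
                        epoll_ctl(ep, EPOLL_CTL_DEL, fd, NULL);
                        close(fd);
                    }
                    /* ... otherwise parse buf and queue a reply ... */
                }
            }
        }
    }

The per-connection cost here is one file descriptor plus whatever application state you keep, rather than a whole process or thread.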

Re: [Beowulf] automount on high ports

2008-07-02 Thread Robert G. Brown
On Wed, 2 Jul 2008, Bogdan Costescu wrote: On Wed, 2 Jul 2008, Robert G. Brown wrote: The way TCP daemons that listen on a well-known/privileged port work is that they accept a connection on that port, then fork a connection on a higher unprivileged (>1023) port on both ends so that the daemo

Re: [Beowulf] automount on high ports

2008-07-02 Thread Perry E. Metzger
Bogdan Costescu <[EMAIL PROTECTED]> writes: > 'man 7 socket' and look up SO_REUSEADDR. Incidentally, I believe this may be part of the problem for the NFS client code in Linux. -- Perry E. Metzger [EMAIL PROTECTED]
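For reference, the option being pointed at is set per-socket with setsockopt(2); a tiny illustration (the helper name is hypothetical, and this is not the Linux NFS client's actual code):

    #include <sys/socket.h>

    int set_reuseaddr(int fd)
    {
        int one = 1;
        /* Allow the local address/port to be reused, e.g. while an old
         * connection on it is still in TIME_WAIT; see socket(7) for the
         * exact semantics. */
        return setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
    }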

Re: [Beowulf] automount on high ports

2008-07-02 Thread Perry E. Metzger
"Robert G. Brown" <[EMAIL PROTECTED]> writes: > On Wed, 2 Jul 2008, Perry E. Metzger wrote: >> By the way, you can now design daemons to handle tens of thousands of >> simultaneous connections with clean event driven design on a modern >> multiprocessor with plenty of memory. This is way off topic

Re: [Beowulf] automount on high ports

2008-07-02 Thread Robert G. Brown
On Wed, 2 Jul 2008, Perry E. Metzger wrote: You don't switch to a different port number after the connection comes in, you stay on it. You can in theory talk to up to (nearly) 2^48 different foreign host/port combos off of local port 25, because every remote host/remote port pair makes for a dif
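The arithmetic behind that figure, for reference: with the local address and local port fixed, the remaining degrees of freedom are the 2^32 possible remote IPv4 addresses times the 2^16 possible remote ports, giving 2^32 x 2^16 = 2^48 distinct connection 4-tuples from a single local port.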

Re: [Beowulf] automount on high ports

2008-07-02 Thread Bogdan Costescu
On Wed, 2 Jul 2008, Robert G. Brown wrote: The way TCP daemons that listen on a well-known/privileged port work is that they accept a connection on that port, then fork a connection on a higher unprivileged (>1023) port on both ends so that the daemon can listen once again. 'man 7 socket' an

Re: [Beowulf] automount on high ports

2008-07-02 Thread Scott Atchley
On Jul 2, 2008, at 10:09 AM, Gerry Creager wrote: Although I believe Lustre's robustness is very good these days, I do not believe that it will not work in your setting. I think that they currently do not recommend mounting a client on a node that is also working as a server as you are doin

Re: [Beowulf] A press release

2008-07-02 Thread Prentice Bisbal
Mark Hahn wrote: >> Hmmm for me, its all about the kernel. Thats 90+% of the battle. >> Some distros use good kernels, some do not. I won't mention who I >> think is in the latter category. > > I was hoping for some discussion of concrete issues. for instance, > I have the impression debi

Re: [Beowulf] automount on high ports

2008-07-02 Thread Perry E. Metzger
"Robert G. Brown" <[EMAIL PROTECTED]> writes: > On Wed, 2 Jul 2008, Carsten Aulbert wrote: >> Which corresponds exactly to the maximum achievable mounts of 358 right >> now. Besides, I'm far from being an expert on TCP/IP, but is it possible >> for a local process to bind to a port which is alread

Re: [Beowulf] Re: energy costs and poor grad students

2008-07-02 Thread Peter St. John
Mark, Would it be feasible to downclock your three nodes? All you physicists know better than I, that the power draw and heat production are not linear in GHz. A 1 GHz processor is less than half the cost per tick of a 2GHz, so if power budget is more urgent for you than time to completion then t
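The first-order physics being referred to: dynamic CPU power goes roughly as P = C * V^2 * f, and since a lower clock generally permits a lower core voltage, power falls faster than linearly in frequency (toward f^3 in the idealized case). That is why downclocking can buy better performance per watt even though each job takes longer.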

Re: [Beowulf] A press release

2008-07-02 Thread Prentice Bisbal
Mark Hahn wrote: does it necessarily have to be a redhat clone. can it also be a debian based clone? >>> >>> but why? is there some concrete advantage to using Debian? >>> I've never understood why Debian users tend to be very True Believer, >>> or what it is that hooks them. >> >>

Re: [Beowulf] Re: energy costs and poor grad students

2008-07-02 Thread Mark Kosmowski
On 7/2/08, Joe Landman <[EMAIL PROTECTED]> wrote: > Hi Mark > > Mark Kosmowski wrote: > > I'm in the US. I'm almost, but not quite ready for production runs - > > still learning the software / computational theory. I'm the first > > person in the research group (physical chemistry) to try to lear

Re: [Beowulf] automount on high ports

2008-07-02 Thread Robert G. Brown
On Wed, 2 Jul 2008, Carsten Aulbert wrote: Which corresponds exactly to the maximum achievable mounts of 358 right now. Besides, I'm far from being an expert on TCP/IP, but is it possible for a local process to bind to a port which is already in use but to another host? I don't think so, but may

Re: [Beowulf] A press release

2008-07-02 Thread Perry E. Metzger
"Robert G. Brown" <[EMAIL PROTECTED]> writes: >> Precisely. It pays to allow people to use what they want. Fewer >> religious battles that way. Whether one distro or another has an >> advantage isn't the point -- people have their own tastes and it >> doesn't pay to tell them "no" without good rea

Re: [Beowulf] A press release

2008-07-02 Thread Robert G. Brown
On Tue, 1 Jul 2008, Perry E. Metzger wrote: Prentice Bisbal <[EMAIL PROTECTED]> writes: does it necessarily have to be a redhat clone. can it also be a debian based clone? but why? is there some concrete advantage to using Debian? I've never understood why Debian users tend to be very True

Re: [Beowulf] automount on high ports

2008-07-02 Thread Perry E. Metzger
Bogdan Costescu <[EMAIL PROTECTED]> writes: >> Every machine might get 1341 connections from clients, and every >> machine might make 1341 client connections going out to other >> machines None of this should cause you to run out of ports, period. > > With all due respect, I think that you are not

Re: [Beowulf] automount on high ports

2008-07-02 Thread Perry E. Metzger
Henning Fehrmann <[EMAIL PROTECTED]> writes: >> Which corresponds exactly to the maximum achievable mounts of 358 right > > 359 ;) > > If the number of mounts is smaller the ports are randomly used in this range. > It would be convenient to enter the insecure area. > Using the option insecure for

Re: [Beowulf] automount on high ports

2008-07-02 Thread Perry E. Metzger
Skip to the bottom for advice on how to make NFS only use non-prived ports. My guess is still that it isn't priv ports that are causing trouble, but I describe at the bottom what you need to do to get rid of that issue entirely. I'd advise reading the rest, but the part about how to disable the st

Re: [Beowulf] Re: energy costs and poor grad students

2008-07-02 Thread Nathan Moore
Does your university have public computer labs? Do the computers run some variant of Unix? At UMN, where I did my grad work in physics, there were a number of semi-public "Scientific Visualization" or "Large Data Analysis" labs that were hosted in the local supercomputer center. The center there

Re: [Beowulf] automount on high ports

2008-07-02 Thread Bogdan Costescu
On Wed, 2 Jul 2008, Perry E. Metzger wrote: A given client would need to be forming over 1000 connections to a given server NFS port for that to be a problem. Not quite. The reserved ports that are free for use (512 and up) are not all free to be taken by NFS as it pleases - there are many da

Re: [Beowulf] automount on high ports

2008-07-02 Thread Gerry Creager
Scott Atchley wrote: On Jul 2, 2008, at 7:22 AM, Carsten Aulbert wrote: Bogdan Costescu wrote: Have you considered using a parallel file system ? We looked a bit into a few, but would love to get any input from anyone on that. What we found so far was not really convincing, e.g. glusterFS a

Re: [Beowulf] automount on high ports

2008-07-02 Thread Henning Fehrmann
> Which corresponds exactly to the maximum achievable mounts of 358 right 359 ;) If the number of mounts is smaller the ports are randomly used in this range. It would be convenient to enter the insecure area. Using the option insecure for the NFS exports is apparently not sufficient. Also every
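For context, the server-side option being discussed lives in /etc/exports and looks like the following (the path and network are made-up examples). Note that "insecure" only relaxes the server's check that requests arrive from ports below 1024; it does not by itself make the Linux client choose high source ports:

    /data  10.10.0.0/16(rw,insecure)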

Re: [Beowulf] Re: energy costs and poor grad students

2008-07-02 Thread Joe Landman
Hi Mark Mark Kosmowski wrote: I'm in the US. I'm almost, but not quite ready for production runs - still learning the software / computational theory. I'm the first person in the research group (physical chemistry) to try to learn plane wave methods of solid state calculation as opposed to iso

[Beowulf] Re: energy costs and poor grad students

2008-07-02 Thread Mark Kosmowski
I'm in the US. I'm almost, but not quite ready for production runs - still learning the software / computational theory. I'm the first person in the research group (physical chemistry) to try to learn plane wave methods of solid state calculation as opposed to isolated atom-centered approximation

Re: [Beowulf] software for compatible with a cluster

2008-07-02 Thread Jon Aquilina
like you said in regards to maya money is a factor for me. if i do decide to set up a rendering cluster my problem is going to be finding someone who can make a small video in blender for me so i can render it. On 7/2/08, Greg Byshenk <[EMAIL PROTECTED]> wrote: > > On Wed, Jul 02, 2008 at 07:32:55

Re: [Beowulf] automount on high ports

2008-07-02 Thread Carsten Aulbert
Hi Perry, Perry E. Metzger wrote: > > Okay. In this instance, you're not going to run out of ports. Every > machine might get 1341 connections from clients, and every machine > might make 1341 client connections going out to other machines. None > of this should cause you to run out of ports, per

Re: [Beowulf] automount on high ports

2008-07-02 Thread Joe Landman
Carsten Aulbert wrote: The clients are connecting from ports below 1024 because Berkeley set up a hack in the original BSD stack so that only root could open ports below 1024. This way, you could "know" the process on the remote host was a root process, thus you could feel "secure" [sic]. It doe

Re: [Beowulf] automount on high ports

2008-07-02 Thread Perry E. Metzger
Carsten Aulbert <[EMAIL PROTECTED]> writes: >> The clients are connecting from ports below 1024 because Berkeley set >> up a hack in the original BSD stack so that only root could open ports >> below 1024. This way, you could "know" the process on the remote host >> was a root process, thus you co

Re: [Beowulf] automount on high ports

2008-07-02 Thread Scott Atchley
On Jul 2, 2008, at 7:22 AM, Carsten Aulbert wrote: Bogdan Costescu wrote: Have you considered using a parallel file system ? We looked a bit into a few, but would love to get any input from anyone on that. What we found so far was not really convincing, e.g. glusterFS at that time was no

Re: [Beowulf] automount on high ports

2008-07-02 Thread Perry E. Metzger
Tim Cutts <[EMAIL PROTECTED]> writes: > On 2 Jul 2008, at 8:26 am, Carsten Aulbert wrote: > >> OK, we have 1342 nodes which act as servers as well as clients. Every >> node exports a single local directory and all other nodes can mount >> this. >> >> What we do now to optimize the available bandwi

Re: [Beowulf] software for compatible with a cluster

2008-07-02 Thread Perry E. Metzger
"Jon Aquilina" <[EMAIL PROTECTED]> writes: > if i use blender how nicely does it work in a cluster? I believe it works quite well. Perry ___ Beowulf mailing list, Beowulf@beowulf.org To change your subscription (digest mode or unsubscribe) visit http:

Re: [Beowulf] A press release

2008-07-02 Thread Tim Cutts
On 2 Jul 2008, at 12:16 pm, Jon Aquilina wrote: one thing must not be forgotten though. in regards to packaging stuff for the ubuntu variant, once someone like you or me uploads it, someone higher up the chain checks it and uploads it to the servers. so basically someone is checking wh

Re: [Beowulf] automount on high ports

2008-07-02 Thread Carsten Aulbert
Hi Bogdan, Bogdan Costescu wrote: > > Have you considered using a parallel file system ? We looked a bit into a few, but would love to get any input from anyone on that. What we found so far was not really convincing, e.g. glusterFS at that time was not really stable, lustre was too easy to cras

Re: [Beowulf] A press release

2008-07-02 Thread Jon Aquilina
one thing must not be forgotten though. in regards to packaging stuff for the ubuntu variant, once someone like you or me uploads it, someone higher up the chain checks it and uploads it to the servers. so basically someone is checking what someone else has packaged. On 7/2/08, Tim Cutts <[EMAI

Re: [Beowulf] A press release

2008-07-02 Thread Jon Aquilina
im also not sure what support is like in other distros but i commend the kubuntu volunteers who man that irc channel for support as well as those who help with development. are there any other distros that provide support like this? On 7/2/08, Jon Aquilina <[EMAIL PROTECTED]> wrote: > > one thing

Re: [Beowulf] A press release

2008-07-02 Thread Bogdan Costescu
On Wed, 2 Jul 2008, Tim Cutts wrote: The difficulty is that many ISVs tend to do a fairly terrible job of packaging their applications as RPM's or DEB's I very much agree with this. While you mentioned init scripts that don't fit the distribution, I can add init scripts that are totally miss

Re: [Beowulf] automount on high ports

2008-07-02 Thread Bogdan Costescu
On Wed, 2 Jul 2008, Carsten Aulbert wrote: OK, we have 1342 nodes which act as servers as well as clients. Every node exports a single local directory and all other nodes can mount this. Have you considered using a parallel file system ? What we do now to optimize the available bandwidth and

Re: [Beowulf] A press release

2008-07-02 Thread Tony Travis
Mark Hahn wrote: [...] but I ask again: what are the reasons one might prefer using debian? really, I'm not criticizing it - I really would like to know why it would matter whether someone (such as ClusterVisionOS (tm)) would use debian or another distro. Hello, Mark. I've been on a well tro

Re: [Beowulf] A press release

2008-07-02 Thread Tim Cutts
On 2 Jul 2008, at 6:06 am, Mark Hahn wrote: I was hoping for some discussion of concrete issues. for instance, I have the impression debian uses something other than sysvinit - does that work out well? Debian uses standard sysvinit-style scripts in /etc/init.d, /etc/ rc0.d, ... thanks. I

Re: [Beowulf] automount on high ports

2008-07-02 Thread Henning Fehrmann
On Wed, Jul 02, 2008 at 09:19:50AM +0100, Tim Cutts wrote: > > On 2 Jul 2008, at 8:26 am, Carsten Aulbert wrote: > > >OK, we have 1342 nodes which act as servers as well as clients. Every > >node exports a single local directory and all other nodes can mount this. > > > >What we do now to optimiz

Re: [Beowulf] automount on high ports

2008-07-02 Thread Tim Cutts
On 2 Jul 2008, at 8:26 am, Carsten Aulbert wrote: OK, we have 1342 nodes which act as servers as well as clients. Every node exports a single local directory and all other nodes can mount this. What we do now to optimize the available bandwidth and IOs is spread millions of files according

Re: [Beowulf] automount on high ports

2008-07-02 Thread Carsten Aulbert
Hi Perry, Perry E. Metzger wrote: > > All NFS clients are connecting to a single port, not to a different > port for every NFS export. You do not need 1400 listening TCP ports on > a server to export 1400 different file systems. Only one port is > needed, whether you are exporting one file syste