Hi all,
Greg Lindahl wrote:
> On Wed, Jul 02, 2008 at 06:31:51PM -0400, Perry E. Metzger wrote:
>
>> It isn't quite that bad. You can use one of the SO_REUSE* calls in the
>> code to make things less dire. Apparently the kernel doesn't do that
>> for NFS client connection establishment, though. T
"Robert G. Brown" <[EMAIL PROTECTED]> writes:
> I'm not quite sure what you mean by "vast numbers of teeny high
> latency requests" so I'm not sure if we really are disagreeing or
> agreeing in different words.
I mostly have worried about such schemes in the case of, say, 10,000
people connecting
On Wed, Jul 02, 2008 at 08:48:32PM -0400, Lawrence Stewart wrote:
> Back in 1994, with 90 MHz pentiums, NCSA's httpd was the leading
> webserver with a design that forked a new process for every request.
Apache eventually moved to a model where forked processes handled
several requests serially before exiting.
Sure, but it is way inefficient. Every single process you fork means
another data segment, another stack segment, which means lots of
memory. Every process you fork also means that concurrency is
achieved
only by context switching, which means loads of expense on changing
MMU state and more.
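For anyone who hasn't seen it, a minimal sketch of the fork-per-connection
pattern being criticized (hypothetical port, no real request handling, most
error checks omitted):

#include <signal.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a;
    memset(&a, 0, sizeof a);
    a.sin_family = AF_INET;
    a.sin_port = htons(8080);              /* hypothetical port */
    a.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(s, (struct sockaddr *)&a, sizeof a);
    listen(s, 128);
    signal(SIGCHLD, SIG_IGN);              /* auto-reap children */

    for (;;) {
        int c = accept(s, NULL, NULL);
        if (c < 0)
            continue;
        if (fork() == 0) {                 /* a whole new process --    */
            close(s);                      /* its own stack and data    */
            /* ... read the request from c and answer it ... */
            close(c);                      /* segments, plus a context  */
            _exit(0);                      /* switch every time it runs */
        }
        close(c);                          /* parent goes back to accept() */
    }
}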
On Wed, 2 Jul 2008, Perry E. Metzger wrote:
"Robert G. Brown" <[EMAIL PROTECTED]> writes:
Well, it actually kind of is. Typically, a box in an HPC cluster is
running stuff that's compute bound and whose primary job isn't serving
vast numbers of teeny high latency requests. That's much more what a web
server does.
On Wed, 2 Jul 2008, David Mathog wrote:
Steve Cousins <[EMAIL PROTECTED]> wrote:
Do different LTO-3 drives have different maximum tape write speeds?
I don't know. I've always heard 80 MB/sec. lto.org shows:
http://www.lto.org/technology/ugen.php?section=0&subsec=ugen
for lto-3 "up to 160 MB/sec" (presumably the compressed figure, vs. 80 MB/sec native).
On Wed, 2 Jul 2008, David Mathog wrote:
Rats.
I wonder what the difference is now? If you don't already have it,
please grab a copy of Exabyte's ltoTool from here:
http://www.exabyte.com/support/online/downloads/downloads.cfm?did=1344&prod_id=581
% /usr/local/src/ltotool/ltoTool -C 1 /dev/n
On Wed, Jul 02, 2008 at 06:31:51PM -0400, Perry E. Metzger wrote:
> It isn't quite that bad. You can use one of the SO_REUSE* calls in the
> code to make things less dire. Apparently the kernel doesn't do that
> for NFS client connection establishment, though. There is probably
> some code to fix
Greg Lindahl <[EMAIL PROTECTED]> writes:
> Go look at code that actually uses priv ports to connect out. Normally
> the port is picked in the connect() call, and that means you can have
> all the 4-tuples. But for priv ports, you have to loop trying specific
> candidate ports under 1024 until you find one that's free.
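A sketch of the loop Greg is describing, roughly what bindresvport(3) /
rresvport(3) do under the hood (needs root, since the candidate ports are
all below 1024):

#include <errno.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Try reserved ports from 1023 downward until one binds. */
int bind_reserved(int fd)
{
    struct sockaddr_in a;
    memset(&a, 0, sizeof a);
    a.sin_family = AF_INET;
    a.sin_addr.s_addr = htonl(INADDR_ANY);

    for (int port = 1023; port >= 512; port--) {
        a.sin_port = htons(port);
        if (bind(fd, (struct sockaddr *)&a, sizeof a) == 0)
            return port;                   /* got one */
        if (errno != EADDRINUSE)
            return -1;                     /* some other error */
    }
    return -1;                             /* all ~512 candidates busy */
}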
Steve Cousins <[EMAIL PROTECTED]> wrote:
> > Just under 60MB/sec seems to be the maximum tape transport read/write
> > limit. Pretty reliably the first write from the beginning of tape was a
> > bit slower than writes started further into the tape.
>
> I believe LTO-3 is rated at 80 MB/sec without compression.
Steve Cousins <[EMAIL PROTECTED]> wrote:
> David Mathog wrote:
> > Just under 60MB/sec seems to be the maximum tape transport read/write
> > limit. Pretty reliably the first write from the beginning of tape was a
> > bit slower than writes started further into the tape.
>
> I believe LTO-3 is rated at 80 MB/sec without compression.
On Wed, Jul 02, 2008 at 08:28:48AM -0400, Perry E. Metzger wrote:
> None
> of this should cause you to run out of ports, period. If you don't
> understand that, refer back to my original message. A TCP socket is a
> unique 4-tuple. The host:port 2-tuples are NOT unique and not an
> exhaustible resource.
2008/7/2 Perry E. Metzger <[EMAIL PROTECTED]>:
>
> "Robert G. Brown" <[EMAIL PROTECTED]> writes:
> >> Well, it actually kind of is. Typically, a box in an HPC cluster is
> >> running stuff that's compute bound and whose primary job isn't serving
> >> vast numbers of teeny high latency requests. That's much more what a
> >> web server does.
On Wed, Jul 2, 2008 at 4:32 AM, Perry E. Metzger <[EMAIL PROTECTED]> wrote:
> "Jon Aquilina" <[EMAIL PROTECTED]> writes:
>> if i use blender how nicely does it work in a cluster?
>
> I believe it works quite well.
As far as I know blender does not have any built-in "clustering"
capabilities. But
Just under 60MB/sec seems to be the maximum tape transport read/write
limit. Pretty reliably the first write from the beginning of tape was a
bit slower than writes started further into the tape.
I believe LTO-3 is rated at 80 MB/sec without compression. Testing it on
our HP unit in an Over
"Robert G. Brown" <[EMAIL PROTECTED]> writes:
>> Well, it actually kind of is. Typically, a box in an HPC cluster is
>> running stuff that's compute bound and whose primary job isn't serving
>> vast numbers of teeny high latency requests. That's much more what a
>> web server does. However...
>
>
On Wed, 2 Jul 2008, Perry E. Metzger wrote:
"Robert G. Brown" <[EMAIL PROTECTED]> writes:
On Wed, 2 Jul 2008, Perry E. Metzger wrote:
By the way, you can now design daemons to handle tens of thousands of
simultaneous connections with clean event driven design on a modern
multiprocessor with plenty of memory.
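A bare-bones sketch of what that looks like on Linux, with epoll doing the
multiplexing instead of one process or thread per connection (the listening
socket is assumed to be set up already; error handling omitted):

#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

void event_loop(int lsock)
{
    int ep = epoll_create(1024);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = lsock };
    epoll_ctl(ep, EPOLL_CTL_ADD, lsock, &ev);

    struct epoll_event ready[1024];
    for (;;) {
        int n = epoll_wait(ep, ready, 1024, -1);
        for (int i = 0; i < n; i++) {
            int fd = ready[i].data.fd;
            if (fd == lsock) {                     /* new connection */
                int c = accept(lsock, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = c };
                epoll_ctl(ep, EPOLL_CTL_ADD, c, &cev);
            } else {
                char buf[4096];
                ssize_t r = read(fd, buf, sizeof buf);
                if (r <= 0) {                      /* EOF or error: drop it */
                    epoll_ctl(ep, EPOLL_CTL_DEL, fd, NULL);
                    close(fd);
                } else {
                    /* ... parse and answer the request in buf ... */
                }
            }
        }
    }
}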
On Wed, 2 Jul 2008, Bogdan Costescu wrote:
On Wed, 2 Jul 2008, Robert G. Brown wrote:
The way TCP daemons that listen on a well-known/privileged port work is
that they accept a connection on that port, then fork a connection on a
higher unprivileged (>1023) port on both ends so that the daemon can listen once again.
Bogdan Costescu <[EMAIL PROTECTED]> writes:
> 'man 7 socket' and look up SO_REUSEADDR.
Incidentally, I believe this may be part of the problem for the NFS
client code in Linux.
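For the curious, a sketch of what SO_REUSEADDR buys you in this situation:
two client sockets can share the same local port as long as the full 4-tuples
differ, i.e. they go to different remote endpoints. (Hypothetical addresses;
exactly when the second bind() is allowed depends on the kernel's SO_REUSEADDR
rules, and a real NFS client would want a reserved port, which needs root.)

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <netinet/in.h>

static int connect_from(uint16_t lport, const char *rip, uint16_t rport)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);

    struct sockaddr_in local;
    memset(&local, 0, sizeof local);
    local.sin_family = AF_INET;
    local.sin_port = htons(lport);              /* fixed local port */
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(fd, (struct sockaddr *)&local, sizeof local);

    struct sockaddr_in remote;
    memset(&remote, 0, sizeof remote);
    remote.sin_family = AF_INET;
    remote.sin_port = htons(rport);
    inet_pton(AF_INET, rip, &remote.sin_addr);
    connect(fd, (struct sockaddr *)&remote, sizeof remote);
    return fd;
}

/* Both of these coexist even though the local host:port is identical,
 * because the remote halves of the 4-tuples differ:
 *
 *     connect_from(50000, "10.0.0.1", 2049);
 *     connect_from(50000, "10.0.0.2", 2049);
 */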
"Robert G. Brown" <[EMAIL PROTECTED]> writes:
> On Wed, 2 Jul 2008, Perry E. Metzger wrote:
>> By the way, you can now design daemons to handle tens of thousands of
>> simultaneous connections with clean event driven design on a modern
>> multiprocessor with plenty of memory. This is way off topic
On Wed, 2 Jul 2008, Perry E. Metzger wrote:
You don't switch to a different port number after the connection comes
in, you stay on it. You can in theory talk to up to (nearly) 2^48
different foreign host/port combos off of local port 25, because every
remote host/remote port pair makes for a different connection.
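An easy way to see that for yourself: ask the accepted socket what its local
port is. It is still the well-known port the daemon listens on; only the
remote half of the 4-tuple differs from connection to connection. (A sketch,
assuming listen_fd is already bound and listening:)

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

void show_local_port(int listen_fd)
{
    int c = accept(listen_fd, NULL, NULL);
    struct sockaddr_in me;
    socklen_t len = sizeof me;
    getsockname(c, (struct sockaddr *)&me, &len);
    /* If listen_fd was bound to port 25, this prints 25 for every
     * accepted connection, not some freshly forked-off port. */
    printf("local port of accepted socket: %d\n", ntohs(me.sin_port));
    close(c);
}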
On Wed, 2 Jul 2008, Robert G. Brown wrote:
The way TCP daemons that listen on a well-known/privileged port work
is that they accept a connection on that port, then fork a
connection on a higher unprivileged (>1023) port on both ends so
that the daemon can listen once again.
'man 7 socket' and look up SO_REUSEADDR.
On Jul 2, 2008, at 10:09 AM, Gerry Creager wrote:
Although I believe Lustre's robustness is very good these days, I
do not believe that it will not work in your setting. I think that
they currently do not recommend mounting a client on a node that is
also working as a server, as you are doing.
Mark Hahn wrote:
>> Hmmm for me, its all about the kernel. Thats 90+% of the battle.
>> Some distros use good kernels, some do not. I won't mention who I
>> think is in the latter category.
>
> I was hoping for some discussion of concrete issues. for instance,
> I have the impression debian uses something other than sysvinit -
> does that work out well?
"Robert G. Brown" <[EMAIL PROTECTED]> writes:
> On Wed, 2 Jul 2008, Carsten Aulbert wrote:
>> Which corresponds exactly to the maximum achievable mounts of 358 right
>> now. Besides, I'm far from being an expert on TCP/IP, but is it possible
>> for a local process to bind to a port which is already in use but to another host?
Mark,
Would it be feasible to downclock your three nodes? All you physicists know
better than I that power draw and heat production are not linear in clock
speed. A 1 GHz processor costs less than half as much per tick as a 2 GHz
part, so if the power budget is more urgent for you than time to completion,
this might be a worthwhile trade-off.
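(The back-of-the-envelope argument, for anyone who wants it: dynamic CPU power
goes roughly as P ~ C * V^2 * f, and since lowering the clock usually lets you
lower the supply voltage as well, cutting f in half can cut power to well
under half.)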
Mark Hahn wrote:
does it necessarily have to be a redhat clone. can it also be a debian
based
clone?
>>>
>>> but why? is there some concrete advantage to using Debian?
>>> I've never understood why Debian users tend to be very True Believer,
>>> or what it is that hooks them.
>>
>>
On 7/2/08, Joe Landman <[EMAIL PROTECTED]> wrote:
> Hi Mark
>
> Mark Kosmowski wrote:
> > I'm in the US. I'm almost, but not quite ready for production runs -
> > still learning the software / computational theory. I'm the first
> > person in the research group (physical chemistry) to try to learn
> > plane wave methods of solid state calculation.
On Wed, 2 Jul 2008, Carsten Aulbert wrote:
Which corresponds exactly to the maximum achievable mounts of 358 right
now. Besides, I'm far from being an expert on TCP/IP, but is it possible
for a local process to bind to a port which is already in use but to
another host? I don't think so, but may
"Robert G. Brown" <[EMAIL PROTECTED]> writes:
>> Precisely. It pays to allow people to use what they want. Fewer
>> religious battles that way. Whether one distro or another has an
>> advantage isn't the point -- people have their own tastes and it
>> doesn't pay to tell them "no" without good reason.
On Tue, 1 Jul 2008, Perry E. Metzger wrote:
Prentice Bisbal <[EMAIL PROTECTED]> writes:
does it necessarily have to be a redhat clone. can it also be a debian
based
clone?
but why? is there some concrete advantage to using Debian?
I've never understood why Debian users tend to be very True Believer,
or what it is that hooks them.
Bogdan Costescu <[EMAIL PROTECTED]> writes:
>> Every machine might get 1341 connections from clients, and every
>> machine might make 1341 client connections going out to other
>> machines. None of this should cause you to run out of ports, period.
>
> With all due respect, I think that you are not
Henning Fehrmann <[EMAIL PROTECTED]> writes:
>> Which corresponds exactly to the maximum achievable mounts of 358 right
>
> 359 ;)
>
> If the number of mounts is smaller, the ports are randomly used in this range.
> It would be convenient to move into the unprivileged ("insecure") port range.
> Using the option insecure for the NFS exports is apparently not sufficient.
Skip to the bottom for advice on how to make NFS only use non-prived
ports. My guess is still that it isn't priv ports that are causing
trouble, but I describe at the bottom what you need to do to get rid
of that issue entirely. I'd advise reading the rest, but the part
about how to disable the st
Does your university have public computer labs? Do the computers run some
variant of Unix?
At UMN, where I did my grad work in physics, there were a number of
semi-public "Scientific Visualization" or "Large Data Analysis" labs that
were hosted in the local supercomputer center. The center there
On Wed, 2 Jul 2008, Perry E. Metzger wrote:
A given client would need to be forming over 1000 connections to a
given server NFS port for that to be a problem.
Not quite. The reserved ports that are free for use (512 and up) are
not all free to be taken by NFS as it pleases - there are many da
Scott Atchley wrote:
On Jul 2, 2008, at 7:22 AM, Carsten Aulbert wrote:
Bogdan Costescu wrote:
Have you considered using a parallel file system ?
We looked a bit into a few, but would love to get any input from anyone
on that. What we found so far was not really convincing, e.g. glusterFS
at that time was not really stable.
> Which corresponds exactly to the maximum achievable mounts of 358 right
359 ;)
If the number of mounts is smaller, the ports are randomly used in this range.
It would be convenient to move into the unprivileged ("insecure") port range.
Using the option insecure for the NFS exports is apparently not
sufficient. Also every
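For reference, the two knobs usually involved here (a sketch only, with
hypothetical paths; the client-side option requires an NFS client that
supports it, on Linux the noresvport mount option):

# server side, /etc/exports: accept requests from source ports >= 1024
/export/data  *(rw,insecure)

# client side: don't insist on a reserved source port
mount -o noresvport server:/export/data /mnt/data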
Hi Mark
Mark Kosmowski wrote:
I'm in the US. I'm almost, but not quite ready for production runs -
still learning the software / computational theory. I'm the first
person in the research group (physical chemistry) to try to learn
plane wave methods of solid state calculation as opposed to isolated
atom-centered approximations.
I'm in the US. I'm almost, but not quite ready for production runs -
still learning the software / computational theory. I'm the first
person in the research group (physical chemistry) to try to learn
plane wave methods of solid state calculation as opposed to isolated
atom-centered approximation
Like you said in regards to Maya, money is a factor for me. If I do decide
to set up a rendering cluster, my problem is going to be finding someone who
can make a small video in Blender for me so I can render it.
On 7/2/08, Greg Byshenk <[EMAIL PROTECTED]> wrote:
>
> On Wed, Jul 02, 2008 at 07:32:55
Hi Perry,
Perry E. Metzger wrote:
>
> Okay. In this instance, you're not going to run out of ports. Every
> machine might get 1341 connections from clients, and every machine
> might make 1341 client connections going out to other machines. None
> of this should cause you to run out of ports, period.
Carsten Aulbert wrote:
The clients are connecting from ports below 1024 because Berkeley set
up a hack in the original BSD stack so that only root could open ports
below 1024. This way, you could "know" the process on the remote host
was a root process, thus you could feel "secure" [sic]. It doe
Carsten Aulbert <[EMAIL PROTECTED]> writes:
>> The clients are connecting from ports below 1024 because Berkeley set
>> up a hack in the original BSD stack so that only root could open ports
>> below 1024. This way, you could "know" the process on the remote host
>> was a root process, thus you co
On Jul 2, 2008, at 7:22 AM, Carsten Aulbert wrote:
Bogdan Costescu wrote:
Have you considered using a parallel file system ?
We looked a bit into a few, but would love to get any input from
anyone
on that. What we found so far was not really convincing, e.g.
glusterFS
at that time was not really stable.
Tim Cutts <[EMAIL PROTECTED]> writes:
> On 2 Jul 2008, at 8:26 am, Carsten Aulbert wrote:
>
>> OK, we have 1342 nodes which act as servers as well as clients. Every
>> node exports a single local directory and all other nodes can mount
>> this.
>>
>> What we do now to optimize the available bandwi
"Jon Aquilina" <[EMAIL PROTECTED]> writes:
> if i use blender how nicely does it work in a cluster?
I believe it works quite well.
Perry
On 2 Jul 2008, at 12:16 pm, Jon Aquilina wrote:
One thing must not be forgotten though. In regards to packaging stuff for the
Ubuntu variant, once someone like you or me uploads it, someone higher up the
chain checks it and uploads it to the servers. So basically someone is
checking what someone else has packaged.
Hi Bogdan,
Bogdan Costescu wrote:
>
> Have you considered using a parallel file system ?
We looked a bit into a few, but would love to get any input from anyone
on that. What we found so far was not really convincing, e.g. glusterFS
at that time was not really stable, lustre was too easy to crash.
One thing must not be forgotten though. In regards to packaging stuff for the
Ubuntu variant, once someone like you or me uploads it, someone higher up the
chain checks it and uploads it to the servers. So basically someone is
checking what someone else has packaged.
On 7/2/08, Tim Cutts <[EMAI
I'm also not sure what support is like in other distros, but I commend the
Kubuntu volunteers who man that IRC channel for support, as well as those who
help with development. Are there any other distros that provide support like
this?
On 7/2/08, Jon Aquilina <[EMAIL PROTECTED]> wrote:
>
> one thing
On Wed, 2 Jul 2008, Tim Cutts wrote:
The difficulty is that many ISVs tend to do a fairly terrible job of
packaging their applications as RPM's or DEB's
I very much agree with this. While you mentioned init scripts that
don't fit the distribution, I can add init scripts that are totally
miss
On Wed, 2 Jul 2008, Carsten Aulbert wrote:
OK, we have 1342 nodes which act as servers as well as clients. Every
node exports a single local directory and all other nodes can mount this.
Have you considered using a parallel file system ?
What we do now to optimize the available bandwidth and
Mark Hahn wrote:
[...]
but I ask again: what are the reasons one might prefer using debian?
really, I'm not criticizing it - I really would like to know why it
would matter whether someone (such as ClusterVisionOS (tm)) would use
debian or another distro.
Hello, Mark.
I've been on a well tro
On 2 Jul 2008, at 6:06 am, Mark Hahn wrote:
I was hoping for some discussion of concrete issues. for instance,
I have the impression debian uses something other than sysvinit -
does that work out well?
Debian uses standard sysvinit-style scripts in /etc/init.d, /etc/rc0.d, ...
thanks. I
On Wed, Jul 02, 2008 at 09:19:50AM +0100, Tim Cutts wrote:
>
> On 2 Jul 2008, at 8:26 am, Carsten Aulbert wrote:
>
> >OK, we have 1342 nodes which act as servers as well as clients. Every
> >node exports a single local directory and all other nodes can mount this.
> >
> >What we do now to optimiz
On 2 Jul 2008, at 8:26 am, Carsten Aulbert wrote:
OK, we have 1342 nodes which act as servers as well as clients. Every
node exports a single local directory and all other nodes can mount
this.
What we do now to optimize the available bandwidth and IOs is spread
millions of files according
Hi Perry,
Perry E. Metzger wrote:
>
> All NFS clients are connecting to a single port, not to a different
> port for every NFS export. You do not need 1400 listening TCP ports on
> a server to export 1400 different file systems. Only one port is
needed, whether you are exporting one file system or 1400 of them.
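(A quick way to confirm that on a server: rpcinfo -p server should show nfs
registered on a single port, normally 2049, no matter how many file systems
are exported.)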