Re: [Beowulf] Selection from processor choices; Requesting Guidance

2006-06-18 Thread Greg Lindahl
On Fri, Jun 16, 2006 at 04:27:50PM -0700, J. Andrew Rogers wrote: > You exaggerate the limitations of GigE. Current incarnations of GigE > can be close enough to Myrinet that it is perfectly functional for > some applications and pretty competitive for someone willing to > extract the capab

Re: [Beowulf] Acceptable rad limits for cluster rooms?

2006-06-18 Thread Greg Lindahl
On Fri, Jun 16, 2006 at 03:21:56PM -0600, Brian Oborn wrote: > The cluster for our Physics department is next to a room that, at the > time of installation, was an empty accelerator hall. I'd test it. If you can find the count of single-bit upsets, even one machine for a week or two will give yo
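One hedged way to get such a count on Linux nodes with ECC memory is the kernel's EDAC counters (a sketch, assuming a 2.6-era kernel with the EDAC driver for your memory controller loaded; the sysfs path varies by chipset and kernel version):

    # correctable (single-bit) errors seen by memory controller 0
    cat /sys/devices/system/edac/mc/mc0/ce_count
    # uncorrectable errors
    cat /sys/devices/system/edac/mc/mc0/ue_count

Sampling these counters before and after a week near the accelerator hall would give a crude upset rate per machine.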

Re: [Beowulf] Selection from processor choices; Requesting Guidance

2006-06-18 Thread laytonjb
Thomas H Dr Pierce <[EMAIL PROTECTED]> wrote: > Dear Amjad Ali, > > Here is the MRTG I/O of a CFD code running on a parallel MPI Beowulf > cluster node with Gigabit Ethernet. It ended at about 3 o'clock. > A lot of I/O chatter. This is the same on all 4 parallel nodes that were > running

Re: [Beowulf] Selection from processor choices; Requesting Guidance

2006-06-18 Thread J. Andrew Rogers
On Jun 16, 2006, at 3:00 PM, Vincent Diepeveen wrote: Jeff, we know how some people can mess up installs, but if you have gigabit Ethernet, with a one-way ping-pong latency of 50-100 us if you're lucky, which is not using DMA by default, I guess; in short, nonstop interrupting your CPU, ver

Re: [Beowulf] Selection from processor choices; Requesting Guidance

2006-06-18 Thread Ron Brightwell
> > > - The network/MPI combination is fairly critical to good performance and > > to > > price/performance. I have done some benchmarks where the right MPI library > > on GigE produces faster results than a bad MPI library on Myrinet. Seems > > counter-intuitive, but I've seen it. > > Jeff > >

Re: [Beowulf] Selection from processor choices; Requesting Guidance

2006-06-18 Thread Thomas H Dr Pierce
Dear Amjad Ali, Here is the MRTG I/O of a CFD code running on a parallel MPI Beowulf cluster node with Gigabit Ethernet. It ended at about 3 o'clock. A lot of I/O chatter. This is the same on all 4 parallel nodes that were running the CFD code. (4 nodes * 5 Mb/s per node = ~20 Mb/s bandwidth ignori
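For scale, a quick check of what that traffic means against the nominal GigE link rate (a sketch, assuming a 1000 Mb/s link and the 5 Mb/s per-node figure above):

    # per-node utilization: 5 Mb/s out of a nominal 1000 Mb/s link
    echo "scale=2; 5 * 100 / 1000" | bc    # => .50 (percent)
    # aggregate across the 4 nodes
    echo "4 * 5" | bc                      # => 20 Mb/s, matching the figure above

So the GigE links themselves are nowhere near saturated here; latency, not bandwidth, would be the constraint for this kind of chatter.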

[Beowulf] Acceptable rad limits for cluster rooms?

2006-06-18 Thread Brian Oborn
The cluster for our Physics department is next to a room that, at the time of installation, was an empty accelerator hall. However, a new electron accelerator has been installed and the cluster room is now a mild radiation area. Before we start considering shielding options, I was wondering if

Re: [Beowulf] Selection from processor choices; Requesting Guidance

2006-06-18 Thread amjad ali
Hi All, first thanks to all for responding. After going through the responses, I am personally inclined to opt for "Two AMD Opteron processors (each one a dual-core processor, so 4 cores total on the board) at each compute node". Moreover, using 2 GB of RAM at each node. Any suggestion about us

[Beowulf] Re: [MPICH] EOF from console

2006-06-18 Thread Matthew Fowler
Hi Philip. The boards actually have two LAN interfaces. I tried bringing down the second one as you suggested, but I have the same problem. Here is the output of mpdcheck -v; I get the same respective output from all the boards I'm using: # mpdcheck -v mpdcheck -v obtaining hostname via gethostna
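For what it's worth, mpdcheck from MPICH2's mpd suite can also be run pairwise to verify that two boards can resolve and reach each other (hostname below is a placeholder; consult the MPICH2 mpd documentation for your version):

    # on the first board: start a test server; it prints the host and port it listens on
    mpdcheck -s
    # on the second board: connect back as a client using the host and port printed above
    mpdcheck -c board1 <port>

If the client side hangs or errors, the problem is name resolution or routing between the interfaces rather than mpd itself.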

Re: [Beowulf] Selection from processor choices; Requesting Guidance

2006-06-18 Thread Krugger
The number of CPUs/cores per motherboard also depends on the rest of your infrastructure. Can you cool them? More CPUs per board means more heat in a smaller space. Can you provide power? Is your electrical infrastructure able to support all the nodes at full load? Can you afford it? You have to fact
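As a hypothetical worked example of that budgeting (the node count and per-node draw below are assumptions for illustration, not figures from this thread):

    nodes=16; watts=300                                         # assumed figures
    echo "scale=1; $nodes * $watts / 1000" | bc                 # => 4.8 kW electrical load
    echo "$nodes * $watts * 3412 / 1000" | bc                   # => ~16377 BTU/hr of heat
    echo "scale=2; $nodes * $watts * 3412 / 1000 / 12000" | bc  # => 1.36, i.e. ~1.4 tons of cooling

The conversions used are 3412 BTU/hr per kW and 12,000 BTU/hr per ton of cooling; every watt you draw ends up as heat the room has to reject.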

Re: [Beowulf] MPICH - ssh config question

2006-06-18 Thread Krugger
You want to use public key auth. For this to work you have to copy your ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys on the remote host. If you don't have this file you need to generate it with "ssh-keygen -t rsa". And check that PubkeyAuthentication is set to yes in /etc/ssh/sshd_config
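A minimal sketch of that setup, assuming OpenSSH and no shared home directory ("node2" is a placeholder hostname):

    ssh-keygen -t rsa            # accept the defaults; empty passphrase for unattended MPI jobs
    # append the public key to the remote authorized_keys (creating ~/.ssh if needed)
    cat ~/.ssh/id_rsa.pub | ssh node2 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
    ssh node2 'chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'
    ssh node2 true               # should now return without prompting for a password

The chmod step matters: sshd silently ignores authorized_keys if the directory or file is group- or world-writable.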

[Beowulf] WORKS 2006 Call for Participation

2006-06-18 Thread Douglas L Thain
Panel on "Workflow as the Methodology of Science" Tuesday June 20 2006 WORKS Workshop HPDC Paris France 12pm - 1.30pm http://www.isi.edu/works06 Moderator Geoffrey Fox A recent NSF workshop http://vtcpc.isi.edu/wiki/index.php/Main_Page proposed that workflow could be viewed as underlying su

Re: [Beowulf] Selection from processor choices; Requesting Guidance

2006-06-18 Thread Douglas Eadline
>> >> desktop (32 bit PCI) cards. I managed to get 14.6 HPL GFLOPS >> >> and 4.35 GROMACS GFLOPS out of 8 nodes consisting of hardware >> > ... >> >> As a point of reference, a quad opteron 270 (2GHz) reported >> >> 4.31 GROMACS GFLOPS. >> > >> > that's perplexing to me, since the first cluster ha