Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-19 Thread Douglas Eadline
For my tests I used GROMACS version 3.3. The GROMACS page does not mention which version was used for the benchmarks reported in their table. -- Doug > On Sat, 2006-06-17 at 11:34, Mark Hahn wrote: >> > >> desktop (32 bit PCI) cards. I managed to get 14.6 HPL GFLOPS >> > >> and 4.35 GROMACS GF

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-19 Thread Kevin Ball
On Sat, 2006-06-17 at 11:34, Mark Hahn wrote: > > >> desktop (32 bit PCI) cards. I managed to get 14.6 HPL GFLOPS > > >> and 4.35 GROMACS GFLOPS out of 8 nodes consisting of hardware > > > ... > > >> As a point of reference, a quad opteron 270 (2GHz) reported > > >> 4.31 GROMACS GFLOPS. > > > > >

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-19 Thread J. Andrew Rogers
On Jun 18, 2006, at 8:00 PM, Greg Lindahl wrote: On Fri, Jun 16, 2006 at 04:27:50PM -0700, J. Andrew Rogers wrote: You exaggerate the limitations of GigE. Current incarnations of GigE can be close enough to Myrinet that it is perfectly functional for some applications and pretty competitive f

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-19 Thread Scott Atchley
On Jun 16, 2006, at 1:43 AM, <[EMAIL PROTECTED]> <[EMAIL PROTECTED]> wrote: Initially, we are deciding to use Gigabit ethernet switch and 1GB of RAM at each node. that seems like an odd choice. it's not much ram, and gigabit is extremely slow (relative to alternatives, or in comparison to
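For a rough sense of the scale gap being referred to here (generic, era-typical numbers, not figures taken from this thread): wire-rate gigabit Ethernet moves about 125 MB/s, while a dual-channel DDR400 (PC3200) memory system peaks around 6.4 GB/s, so

\[
\frac{6.4\ \mathrm{GB/s}\ (\text{dual-channel DDR400 peak})}{0.125\ \mathrm{GB/s}\ (\text{GigE wire rate})} \approx 50 .
\]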

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-19 Thread Joachim Worringen
[EMAIL PROTECTED] wrote: Sure. This is for one specific CFD code. Let's assume that MPICH 1 is the baseline. MPICH2 was twice as fast (half the run time). LAM was 30% faster than MPICH2. Scali MPI Connect was 20% faster than LAM. Altogether, Scali MPI Connect was twice as fast as MPICH1. Hmmm..
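For what it's worth, if the quoted ratios are read as multiplicative speedups (an assumption about what the original post meant: 2.0 for MPICH2 over MPICH1, 1.3 for LAM over MPICH2, 1.2 for Scali over LAM), they compound to roughly 3x over MPICH1 rather than the 2x stated:

\[
2.0 \times 1.3 \times 1.2 \approx 3.1 .
\]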

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-18 Thread Greg Lindahl
On Fri, Jun 16, 2006 at 04:27:50PM -0700, J. Andrew Rogers wrote: > You exaggerate the limitations of GigE. Current incarnations of GigE > can be close enough to Myrinet that it is perfectly functional for > some applications and pretty competitive for someone willing to > extract the capab

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-18 Thread laytonjb
Thomas H Dr Pierce <[EMAIL PROTECTED]> wrote: > Dear Amjad Ali, > > Here is the MRTG I/O of a CFD code running on a parallel MPI beowulf > cluster node with Gigabit ethernet. It ended at about 3 o'clock. > A lot of I/O chatter. This is the same on all 4 parallel nodes that were > running

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-18 Thread J. Andrew Rogers
On Jun 16, 2006, at 3:00 PM, Vincent Diepeveen wrote: Jeff, we know how some people can mess up installs, but if you have gigabit ethernet, with a one-way ping-pong latency of like 50-100 us if you're lucky, which is not using DMA I guess by default; in short nonstop interrupting your cpu, ver

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-18 Thread Ron Brightwell
> > > - The network/MPI combination is fairly critical to good performance and > > to > > price/performance. I have done some benchmarks where the right MPI library > > on GigE produces faster results than a bad MPI library on Myrinet. Seems > > counter-intuitive, but I've seen it. > > Jeff > >

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-18 Thread Thomas H Dr Pierce
Sent by: [EMAIL PROTECTED] 06/15/2006 04:02 AM To beowulf@beowulf.org: Hi ALL We are going to build a true Beowulf cluster for Numerical Simulation of Computational Fluid Dynamics (CFD) models

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-18 Thread amjad ali
Hi All, first thanks to all for responding. After going through the responses, I personally feel inclined towards opting for "Two AMD Opteron Processors (each one a dual-core processor) (total 4 cores on the board) at each compute node". Moreover, using 2 GB of RAM at each node. Any suggestion about us

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-18 Thread Krugger
The number of CPUs/cores per motherboard also has to do with the remaining infrastructure. Can you cool them? More CPUs per board means more heat in a smaller room. Can you provide power? Is your electrical infrastructure able to support all the nodes at full load? Can you afford it? You have to fact
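As a rough, hypothetical illustration of the power and cooling point (the node count and per-node wattage below are illustrative assumptions, not figures from this thread): a 16-node cluster of dual-socket boxes drawing about 300 W each needs on the order of

\[
16 \times 300\ \mathrm{W} = 4.8\ \mathrm{kW}, \qquad \frac{4.8\ \mathrm{kW}}{3.517\ \mathrm{kW/ton}} \approx 1.4\ \text{tons of cooling},
\]

plus electrical circuits sized for the corresponding current at full load.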

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-18 Thread Douglas Eadline
>> >> desktop (32 bit PCI) cards. I managed to get 14.6 HPL GFLOPS >> >> and 4.35 GROMACS GFLOPS out of 8 nodes consisting of hardware >> > ... >> >> As a point of reference, a quad opteron 270 (2GHz) reported >> >> 4.31 GROMACS GFLOPS. >> > >> > that's perplexing to me, since the first cluster ha

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-17 Thread Joe Landman
Geoff Jacobs wrote: Well, each Opteron core would have to split its local memory pool with its sister, so pure bandwidth would be similar. The memory controller on the Opteron would give a latency bonus, but the registered DIMMs would incur a penalty. The Socket A motherboards are using an SIS

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-17 Thread Geoff Jacobs
Mark Hahn wrote: desktop (32 bit PCI) cards. I managed to get 14.6 HPL GFLOPS and 4.35 GROMACS GFLOPS out of 8 nodes consisting of hardware >>> ... As a point of reference, a quad opteron 270 (2GHz) reported 4.31 GROMACS GFLOPS. >>> that's perplexing to me, since the first clust

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-17 Thread Mark Hahn
> >> desktop (32 bit PCI) cards. I managed to get 14.6 HPL GFLOPS > >> and 4.35 GROMACS GFLOPS out of 8 nodes consisting of hardware > > ... > >> As a point of reference, a quad opteron 270 (2GHz) reported > >> 4.31 GROMACS GFLOPS. > > > > that's perplexing to me, since the first cluster has semp/

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-17 Thread Geoff Jacobs
Mark Hahn wrote: >> GigE is not perfect. My point is that for many applications >> it can work well. > > for many apps of certain job sizes... > >> desktop (32 bit PCI) cards. I managed to get 14.6 HPL GFLOPS >> and 4.35 GROMACS GFLOPS out of 8 nodes consisting of hardware > ... >> As a point of

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-17 Thread Mark Hahn
> GigE is not perfect. My point is that for many applications > it can work well. for many apps of certain job sizes... > desktop (32 bit PCI) cards. I managed to get 14.6 HPL GFLOPS > and 4.35 GROMACS GFLOPS out of 8 nodes consisting of hardware ... > As a point of reference, a quad opteron 270

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-17 Thread Joe Landman
Greg Lindahl wrote: > On Sat, Jun 17, 2006 at 02:36:22AM +0100, Vincent Diepeveen wrote: > >> Several persons replied and not a SINGLE ONE of them talks about >> one way pingpong latency, which is one of the most important features >> of a highend network for *many* applications. > > Actually s

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-17 Thread Douglas Eadline
> Several persons replied and not a SINGLE ONE of them talks about > one way pingpong latency, which is one of the most important features > of a highend network for *many* applications. Sigh, as I mentioned in my past post, go to the link below. Understand that it requires reading skills. htt

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-17 Thread Greg Lindahl
On Sat, Jun 17, 2006 at 02:36:22AM +0100, Vincent Diepeveen wrote: > Several persons replied and not a SINGLE ONE of them talks about > one way pingpong latency, which is one of the most important features > of a highend network for *many* applications. Actually several of them did mention it, bu
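For anyone who wants to measure this on their own fabric, here is a minimal MPI ping-pong sketch along the usual lines (not code from this thread; the 1-byte message, the iteration count, and halving the round trip into a one-way figure are just the standard conventions):

/* pingpong.c: estimate one-way small-message latency between ranks 0 and 1.
   Build with: mpicc -O2 pingpong.c -o pingpong ; run with: mpirun -np 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 10000;
    char buf = 0;              /* 1-byte payload: we want latency, not bandwidth */
    int rank, i;
    double t0, t1;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(&buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
        } else if (rank == 1) {
            MPI_Recv(&buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(&buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)             /* half the round trip = one-way latency */
        printf("one-way latency: %.2f us\n", (t1 - t0) / iters / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}

On the GigE setups discussed in this thread one would typically expect a result in the tens of microseconds, roughly consistent with the 50-100 us figure quoted above, versus single digits for the higher-end interconnects of that era.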

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-16 Thread Geoff Jacobs
[EMAIL PROTECTED] wrote: > Sure. This is for one specific CFD code. Let's assume that MPICH 1 is the > baseline. > MPICH2 was twice as fast (half the run time). LAM was 30% faster than MPICH2. > Scali MPI Connect was 20% faster than LAM. Altogether, Scali MPI Connect > was twice as fast as MPICH1.

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-16 Thread Vincent Diepeveen
From: "J. Andrew Rogers" <[EMAIL PROTECTED]>, Sent: Saturday, June 17, 2006 12:27 AM: On Jun 16, 2006, at 3:00 PM, Vincent Diepeveen wrote: Jeff

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-16 Thread Douglas Eadline
> >> - The network/MPI combination is fairly critical to good performance and >> to >> price/performance. I have done some benchmarks where the right MPI >> library >> on GigE produces faster results than a bad MPI library on Myrinet. Seems >> counter-intuitive, but I've seen it. >> Jeff > > Jeff,

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-16 Thread Vincent Diepeveen
- The network/MPI combination is fairly critical to good performance and to price/performance. I have done some benchmarks where the right MPI library on GigE produces faster results than a bad MPI library on Myrinet. Seems counter-intuitive, but I've seen it. Jeff Jeff, we know how some peop

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-16 Thread laytonjb
Geoff Jacobs <[EMAIL PROTECTED]> wrote: > [EMAIL PROTECTED] wrote: > > > - The network/MPI combination is fairly critical to good performance and to > > price/performance. I have done some benchmarks where the right MPI library > > on GigE produces faster results than a bad MPI library on My

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-16 Thread Geoff Jacobs
[EMAIL PROTECTED] wrote: > - The network/MPI combination is fairly critical to good performance and to > price/performance. I have done some benchmarks where the right MPI library > on GigE produces faster results than a bad MPI library on Myrinet. Seems > counter-intuitive, but I've seen it. > -

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-15 Thread laytonjb
> > > Initially, we are deciding to use Gigabit ethernet switch and 1GB of > > >RAM at > > >each node. > > that seems like an odd choice. it's not much ram, and gigabit is > extremely slow (relative to alternatives, or in comparison to on-board > memory access.) This is a common misconceptio

RE: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-15 Thread laytonjb
In message from "amjad ali" <[EMAIL PROTECTED]> (Thu, 15 Jun 2006 04:02:12 -0400): >Hi ALL >We ar

RE: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-15 Thread Michael Will
From Mikhail Kuzminsky, Thursday, June 15, 2006 6:07 AM: In message from "amjad ali" <[EMAIL PROTECT

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-15 Thread Vincent Diepeveen
e expensive though. Good luck, Vincent. - Original Message - From: amjad ali, Sent: Thursday, June 15, 2006 9:02 AM: Hi ALL We are going to build a true Beowul

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-15 Thread Mark Hahn
> > 1. One processor at each of the compute nodes > > 2. Two processors (on one motherboard) at each of the compute nodes > > 3. Two processors (each one a dual-core processor) (total 4 cores on > > 4. Four processors (on one motherboard) at each of the compute nodes. not considering a 4x2

Re: [Beowulf] Slection from processor choices; Requesting Giudence

2006-06-15 Thread Mikhail Kuzminsky
In message from "amjad ali" <[EMAIL PROTECTED]> (Thu, 15 Jun 2006 04:02:12 -0400): Hi ALL We are going to build a true Beowulf cluster for Numerical Simulation of Computational Fluid Dynamics (CFD) models at our university. My question is: what is the best choice for us out of the followin

[Beowulf] Slection from processor choices; Requesting Giudence

2006-06-15 Thread amjad ali
Hi ALL We are going to build a true Beowulf cluster for Numerical Simulation of Computational Fluid Dynamics (CFD) models at our university. My question is: what is the best choice for us, out of the following choices about processors, for a given fixed/specific amount/budget: One processor at