Dear Amjad Ali,

Here is the MRTG I/O graph for a CFD code running on one node of a parallel MPI Beowulf cluster with Gigabit Ethernet; the job ended at about 3 o'clock.
There is a lot of I/O chatter, and the pattern is the same on all four parallel nodes that were running the CFD code (4 nodes * ~5 Mb/s per node = ~20 Mb/s of bandwidth, ignoring
wait times; I am still learning how to measure the latency and wait times while a job runs).
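
One simple way to get a first handle on the communication time is to bracket the exchange calls with MPI_Wtime and accumulate the difference. Below is a minimal sketch in C; the ring exchange is only a placeholder, not your actual CFD communication pattern:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double comm_time = 0.0;                 /* accumulated time spent in communication */
    double buf_send = (double)rank, buf_recv = 0.0;
    int right = (rank + 1) % size;          /* placeholder ring-exchange partners */
    int left  = (rank - 1 + size) % size;

    double t_total = MPI_Wtime();
    for (int step = 0; step < 1000; ++step) {
        /* ... computation for this time step would go here ... */

        double t0 = MPI_Wtime();            /* time only the exchange */
        MPI_Sendrecv(&buf_send, 1, MPI_DOUBLE, right, 0,
                     &buf_recv, 1, MPI_DOUBLE, left, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        comm_time += MPI_Wtime() - t0;
    }
    t_total = MPI_Wtime() - t_total;

    printf("rank %d: %.3f s total, %.3f s (%.1f%%) waiting in communication\n",
           rank, t_total, comm_time, 100.0 * comm_time / t_total);

    MPI_Finalize();
    return 0;
}

An MPI profiling library such as mpiP should give a similar per-call breakdown without modifying the source.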



Ganglia shows memory usage of about 2 GB per node, with 2 processors per node, i.e., roughly 1 GB per process.

Looks like you could get by with 1 GB of memory.
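
If you want to check that figure from inside the code rather than from Ganglia's node totals, here is a minimal sketch (assuming a Linux-style /proc filesystem; the helper name is just illustrative) that has each MPI rank print its own resident set size:

#include <mpi.h>
#include <stdio.h>
#include <string.h>

/* Read the current resident set size (in kB) from /proc/self/status.
   Assumes a Linux-style /proc filesystem; returns -1 if not found. */
static long resident_kb(void)
{
    FILE *f = fopen("/proc/self/status", "r");
    if (!f) return -1;

    char line[256];
    long kb = -1;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "VmRSS:", 6) == 0) {
            sscanf(line + 6, "%ld", &kb);
            break;
        }
    }
    fclose(f);
    return kb;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* ... the CFD solver would run here; call resident_kb() after the
       big arrays have been allocated to see the real per-process footprint ... */

    long kb = resident_kb();
    if (kb >= 0)
        printf("rank %d: resident memory %.1f MB\n", rank, kb / 1024.0);
    else
        printf("rank %d: VmRSS not available\n", rank);

    MPI_Finalize();
    return 0;
}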

------
Sincerely,

  Tom Pierce



"amjad ali" <[EMAIL PROTECTED]>
Sent by: [EMAIL PROTECTED]

06/15/2006 04:02 AM

To
beowulf@beowulf.org
cc
Subject
[Beowulf] Slection from processor choices; Requesting Giudence





Hi ALL

We are going to build a true Beowulf cluster for numerical simulation of Computational Fluid Dynamics (CFD) models at our university. My question is: which of the following processor configurations is the best choice for us, for a given fixed budget?
1. One processor in each compute node
2. Two processors (on one motherboard) in each compute node
3. Two processors, each dual-core (four cores total on the board), in each compute node
4. Four processors (on one motherboard) in each compute node

Initially, we are planning to use a Gigabit Ethernet switch and 1 GB of RAM in each node.

Please guide me as to how much the parallel programming will differ for the above four choices of compute node.
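
(To make the question concrete: as far as I understand, a flat MPI code would look identical for all four options and only the mpirun launch line would change, while the multi-core options would also allow a hybrid MPI+OpenMP style, roughly like the purely illustrative sketch below; please correct me if that picture is wrong.)

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    /* Hybrid style: one MPI process per node, one OpenMP thread per core. */
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local_sum = 0.0;

    /* Placeholder work loop, split across the cores of this node. */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < 1000000; ++i)
        local_sum += 1.0 / (i + 1.0);

    /* Combine the per-node results across the cluster with MPI as usual. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f\n", global_sum);

    MPI_Finalize();
    return 0;
}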

With best regards,
Amjad Ali.
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf

