> On Behalf Of Rahul Nabar
> 
> Rmax/Rpeak = 0.83 seems a good guess, based on one very similar system
> on the Top500.
> 
> Thus I come up with a number of around 1.34 TeraFLOPS for my cluster
> of 24 servers.  Does that seem a reasonable ballpark?  It needn't be
> very accurate, but I don't want to be an order of magnitude off
> [maybe a decimal mistake in the math!].

You're in the right ballpark.  I recently got 0.245 TFLOPS on HPL on a 4-node 
version of what you have (with GotoBLAS), so 6x that number (~1.47 TF/s) is in 
line with your 1.34 TF/s estimate.  My CPUs were 2.3 GHz Opteron 2356s instead 
of your 2.2 GHz parts.
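
For a quick sanity check on the arithmetic, here is a peak-FLOPS sketch in 
Python (the dual-socket count and the 4 flops/cycle figure for 
Barcelona-class Opterons are my assumptions, not numbers from your post):

# Back-of-the-envelope Rpeak for the 24-node cluster.
nodes = 24
sockets = 2              # assumed dual-socket boards (2000-series Opterons)
cores = 4                # quad-core Opteron 2354
ghz = 2.2
flops_per_cycle = 4      # Barcelona core: 4 double-precision flops/cycle

rpeak = nodes * sockets * cores * ghz * flops_per_cycle       # GFLOPS
print("Rpeak ~ %.2f TFLOPS" % (rpeak / 1000.0))               # ~1.69
print("Rmax  ~ %.2f TFLOPS at 0.83" % (rpeak * 0.83 / 1000))  # ~1.40

That lands near the 1.34 TF/s figure, so at least there's no decimal slip.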

Greg is also right that memory size is a factor, since more memory allows a 
larger N to be used for HPL.  I used a pretty small N on this HPL run since we 
ran it as part of an HPC Challenge suite run, and a smaller N can be better 
for PTRANS if you are interested in the non-HPL parts of HPCC (as I was).
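
If you want to size N, the usual rule of thumb is to let the N x N 
double-precision matrix fill roughly 80% of aggregate memory.  A sketch (the 
8 GiB/node figure is only an assumed example, not from this thread):

import math

nodes = 24
mem_per_node_gib = 8               # assumed example; use your real RAM here
total_bytes = nodes * mem_per_node_gib * 2**30

# An N x N matrix of 8-byte doubles should occupy ~80% of total memory.
n = int(math.sqrt(0.80 * total_bytes / 8))
n -= n % 232                       # round down to a multiple of NB (e.g. 232)
print("Suggested HPL N ~ %d" % n)  # ~143000 for this example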

> All 64-bit machines with a dual-channel bonded Gigabit Ethernet
> interconnect.  Quad-Core AMD Opteron(tm) Processor 2354.

As others have said, 50% is a more likely HPL efficiency for a large GigE 
cluster, but with your smallish cluster (24 nodes) and bonded channels, you 
would probably get closer to 80% than 50%.
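
Putting rough numbers on that, using the ~1.69 TFLOPS Rpeak sketched above:

rpeak_tflops = 1.69      # from the earlier sketch (assumed dual-socket nodes)
for eff in (0.50, 0.80):
    print("%.0f%% efficiency -> ~%.2f TFLOPS" % (eff * 100, rpeak_tflops * eff))
# 50% -> ~0.85 TFLOPS; 80% -> ~1.35 TFLOPS, close to the original estimate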

-Tom

> 
> 
> PS.  The "Athelon" was my typo earlier, sorry!
> 
> --
> Rahul

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
