Bill Rankin wrote:
Douglas:

[...]
What this machine does do is validate to some extent the continued
use and development of GPUs in an HPC/cluster setting.
[...]
Nvidia claims Tianhe-1A's 4.04 megawatts of CUDA GPUs and Xeon CPUs is
three times more power efficient than CPUs alone.  The Nvidia press
release is at http://bit.ly/d9VNtY

Numbers game.  Lies, damned lies, and benchmarks. :-)
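For what it's worth, the press-release numbers can be sanity-checked with some quick arithmetic. A minimal sketch, assuming the commonly reported Top500 figures for Tianhe-1A (2.566 PFLOPS HPL at 4.04 MW); the "CPU-only" wattage is just Nvidia's 3x claim turned around, not a measurement:

```python
# Back-of-envelope efficiency check for Tianhe-1A.
# Figures below are the reported Top500/press-release numbers, not my own.
linpack_pflops = 2.566   # HPL result, petaflops (reported)
power_mw = 4.04          # power draw, megawatts (reported)

# Convert to the usual MFLOPS/W efficiency metric.
mflops_per_watt = (linpack_pflops * 1e9) / (power_mw * 1e6)
print(f"hybrid machine: {mflops_per_watt:.0f} MFLOPS/W")

# Nvidia's "three times more power efficient" claim, read literally,
# implies a CPU-only machine with the same HPL score would draw:
cpu_only_mw = power_mw * 3
print(f"implied CPU-only draw: {cpu_only_mw:.2f} MW")
```

That works out to roughly 635 MFLOPS/W for the hybrid machine, with the claim implying a ~12 MW CPU-only equivalent. Of course, HPL efficiency says little about what real applications see, which is rather the point being argued below.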

I imagine if they had quartered the number of CPU cores and doubled the number of GPUs per 
node, they could have gotten even larger HPL numbers without significantly increasing 
their power footprint.  But they didn't.  Why?  Perhaps because even though it would have 
been "more powerful" on paper, it probably would not run real applications any 
faster.
It depends how you define "real" applications, but to offer my guess at your question "But they didn't. Why?":

One word: cost.
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf