> Define "real" applications,

Something that produces tangible, scientifically useful results that would not 
otherwise have been realized without the availability and capability of that 
machine.


> but to give my guess at your question "But they didn't. Why?"
> 
> One word - cost

Well, that's the obvious (and universal) given.  But it's not a useful answer 
in this context.  Cost is always a limiting factor.  Optimizing capability 
within the budget envelope is the challenge.

Now, to be fair, my question was somewhat leading (and my argument is somewhat 
a reductio ad absurdum), but what if the system designers had reduced the 
number of CPU cores per node and used the money saved to purchase additional 
GPU nodes?  Make the system really CPU-light and GPU-heavy.  You would be left 
with something that would potentially have a higher HPL number while 
maintaining the same overall system cost.
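To make the back-of-envelope concrete, here's a toy sketch in Python.  Every 
number in it (the budget, the per-node prices, the per-core and per-GPU 
TFLOPS) is invented purely for illustration and comes from no real system; it 
just shows how, under a fixed budget, trading CPU cores for more GPU-heavy 
nodes can raise aggregate peak, and hence, roughly, the HPL number.

    # Toy cost/performance model.  All prices and FLOPS figures below are
    # invented for illustration; they are not taken from any real system.

    BUDGET = 10_000_000  # total hardware budget in dollars (hypothetical)

    def node_cost(cores, gpus, base=2_000, per_core=200, per_gpu=2_500):
        """Cost of one node in dollars (hypothetical prices)."""
        return base + cores * per_core + gpus * per_gpu

    def peak_tflops(nodes, cores, gpus, per_core=0.01, per_gpu=0.5):
        """Aggregate theoretical peak in TFLOPS (hypothetical rates)."""
        return nodes * (cores * per_core + gpus * per_gpu)

    for label, cores, gpus in [("CPU-heavy", 32, 2),
                               ("CPU-light, GPU-heavy", 8, 2)]:
        nodes = BUDGET // node_cost(cores, gpus)
        print(f"{label:22s}: {nodes:4d} nodes, "
              f"~{peak_tflops(nodes, cores, gpus):7.1f} TFLOPS peak")

Under those made-up numbers the GPU-heavy configuration comes out well ahead 
on paper, which is the whole point of the question: if HPL were all that 
mattered, that is the direction the budget would have been pushed.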

But why didn't they?  Why instead did they spend their money on things like a 
custom high-performance interconnect (which tends not to be a limiting factor 
in HPL performance) and lots of cores on each node?  And, IIRC, 20% of their 
nodes don't even have GPUs?


My point is that while GPUs are certainly a potent tool in HPC, trying to 
draw some sort of universal conclusion about their efficacy and general 
usefulness from a single contrived benchmark is essentially the same as 
trying to extrapolate from a single data point.  Unfortunately, many people 
in the media do not seem to have any reservations about doing exactly that.


Take care, and have a great weekend.

-b

