On Apr 4, 2011, at 10:20 PM, Mark Hahn wrote:

>> GPU's completely annihilate cpu's everywhere.
>
> this is complete nonsense.  GPUs do very nicely on a quite narrow
> set of problems.  for a somewhat larger set of problems, they do OK,
> but pretty "meh", really, considering.  for many problems, GPUs are
> irrelevant, whether that's because the problem uses too much
> memory, or already scales well on non-GPU, or doesn't have a
> GPU-friendly structure.
>
>> 818 execution units that can do multiplication 32 x 32 bits == 64 bits.
>> That kicks butt. bye bye cpu's.
>
> well, for your application, which is quite narrow.
Which is about any relevant domain where massive computation takes place.

Few algorithms really profit bigtime from a lot of RAM; in most cases you can replace the RAM with massive computation and a tad of memory, and the cases where that trade-off is impossible are very rare. For those few cases you order a few nodes with massive RAM rather than big cpu power.

Yet the majority of HPC calculations are not like that, especially if we add the company codes: the simulators and the oil, gas, car and aviation industry. So roughly 95% of all codes just need massive cpu power and can get away with relatively small RAM sizes per compute unit. Not to be confused, btw, with a "compute unit" of AMD, as that is just a small part of a gpu; speaking of redefinitions :)

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf
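[Editorial illustration of the time-memory trade-off the post alludes to: a large precomputed table can often be swapped for on-demand recomputation using O(1) memory. The function names and the sine-table example are illustrative assumptions, not from the original thread.]

```python
import math

def build_table(n):
    """Memory-heavy approach: precompute and store one value per index (O(n) RAM)."""
    return [math.sin(2 * math.pi * k / n) for k in range(n)]

def sin_on_demand(k, n):
    """Compute-heavy approach: recompute each value when needed (O(1) RAM)."""
    return math.sin(2 * math.pi * k / n)

# Both approaches produce the same values; one trades RAM for FLOPs.
n = 1 << 16
table = build_table(n)
for k in (0, 1, 12345, n - 1):
    assert abs(table[k] - sin_on_demand(k, n)) < 1e-12
```

Which side of the trade-off wins depends on how cheap the recomputation is relative to the memory traffic it avoids, which is exactly why codes that fit this pattern tolerate small per-node RAM.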