> As we know by now, GPUs can run some problems many times faster than CPUs
it's good to cultivate some skepticism: the paper that quotes 40x
does so with a somewhat tilted comparison. (the comparison I'd call
fair is a host with 2x 3.2 GHz quad-core Core2 vs one current
high-end GPU card. the former delivers 102.4 SP Gflops; the latter
something like 1.2 SP Tflops. those are both peak/theoretical numbers.
the nature of the problem determines how much slower real workloads
run - and I'd suggest that as a problem becomes less well suited,
performance falls off _faster_ on the GPU than on the CPU.)
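for the record, the 102.4 assumes 4 SP flops/cycle per core (one
4-wide SSE op per cycle):

    2 sockets x 4 cores x 3.2 GHz x 4 SP flops/cycle = 102.4 SP Gflops
    vs ~1200 SP Gflops peak for the card -> roughly 12x at peak, not 40x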
> from what I understand, GPUs are useful only with certain classes of
> numerical problems and discretization schemes, and of course the code
> must be rewritten.
I think it's fair to say that GPUs are good for graphics-like loads,
or more generally: fairly small data sets, accessed in a data-parallel
fashion or with very regular and limited sharing, and with high
work-per-data.
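as a contrived sketch of that kind of load (names and numbers here are
just illustrative, not from any particular code): one thread per
element, contiguous access, no sharing, and plenty of arithmetic per
element loaded.

    // one thread per element; warps read/write contiguous ("coalesced")
    // memory, threads never communicate, and each element gets a pile
    // of arithmetic before being written back - the GPU-friendly case.
    __global__ void work_per_element(int n, int steps, float a, float *x)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float v = x[i];
        for (int s = 0; s < steps; ++s)   // high work-per-data
            v = a * v + 0.5f;
        x[i] = v;
    }

    // launched as, say:
    //   work_per_element<<<(n + 255) / 256, 256>>>(n, 100, 1.0001f, d_x);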
> I'm part of a group that is purchasing our first Beowulf cluster for a
> climate model and an estuary model using Chombo
> (http://seesar.lbl.gov/ANAG/chombo/). Getting up to speed (ha) on [...]
offhand, I'd guess that adaptive grids will be substantially harder
to run efficiently on a GPU than a uniform grid.
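a rough sketch of why (this is not Chombo's actual layout, just the
generic contrast between direct and indirect indexing):

    // uniform grid: neighbours sit at fixed strides, so a warp's loads
    // are contiguous and predictable.
    __global__ void laplacian_uniform(int nx, int ny,
                                      const float *u, float *out)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        int j = blockIdx.y * blockDim.y + threadIdx.y;
        if (i < 1 || j < 1 || i >= nx - 1 || j >= ny - 1) return;
        int c = j * nx + i;
        out[c] = u[c-1] + u[c+1] + u[c-nx] + u[c+nx] - 4.0f * u[c];
    }

    // adaptive/irregular case, very roughly: neighbours come from an
    // index table, so loads scatter and warps diverge near refinement
    // boundaries.
    __global__ void laplacian_indirect(int ncells,
                                       const int *nbr /* 4 per cell */,
                                       const float *u, float *out)
    {
        int c = blockIdx.x * blockDim.x + threadIdx.x;
        if (c >= ncells) return;
        out[c] = u[nbr[4*c]] + u[nbr[4*c+1]] + u[nbr[4*c+2]] + u[nbr[4*c+3]]
               - 4.0f * u[c];
    }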
> [...] than others? Given the very substantial speed improvements with
> GPUs, will there be a movement to GPU clusters, even if there is a
> substantial cost in problem reformulation? Or are GPUs only suitable
> for a rather narrow range of numerical problems?
GP-GPU tools are currently immature, and IMO the hardware probably needs
a generation of generalization before it becomes really widely used.
OTOH, GP-GPU has obviously drained much of the interest away from,
e.g., FPGA computation. I don't know whether there is still enough interest
in vector computers to drain anything...
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf