------- Comment #2 from rob1weld at aol dot com 2009-05-18 17:36 -------
(In reply to comment #1)
> Yes, GPU libraries would be nice, but this needs a lot of work to begin with.
> First you have to support the GPUs. This also amounts to doubling the
> support. If you really want them, since this is open source, start
> contributing.
I'm planning a full hardware upgrade in the coming months, including an expensive graphics card to try this. Some of the newest cards run at over a TeraFLOP, though only for "embarrassingly parallel" code - http://en.wikipedia.org/wiki/Embarrassingly_parallel - and some of the newest motherboards accept _FOUR_ graphics cards. It seems less expensive to use GPUs and recompile a few apps than to buy a motherboard with multiple CPUs, or to find a chip faster than the i7.

Even if this "only doubled" our computer's speed, the endeavor would be well worth doing. I suspect that Fortran's vector math could be converted easily and would benefit greatly.

Look for this feature in gcc in a few years (sooner with everyone's help).

Rob

-- 
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=40028