CUDA comes with a full BLAS library and an FFT library (for 1D, 2D, and 3D transforms).
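
To give a flavor of the interface, here is a minimal sketch of a single-precision matrix multiply through the legacy cublas.h calls; the n-by-n, column-major layout and the gemm_on_gpu wrapper name are just assumptions for illustration:

    #include <cublas.h>

    /* illustrative only: C = A*B for n-by-n column-major matrices,
       single precision, using the CUBLAS init/alloc/copy helpers */
    void gemm_on_gpu(int n, const float *A, const float *B, float *C)
    {
        float *dA, *dB, *dC;
        cublasInit();
        cublasAlloc(n * n, sizeof(float), (void **)&dA);
        cublasAlloc(n * n, sizeof(float), (void **)&dB);
        cublasAlloc(n * n, sizeof(float), (void **)&dC);
        cublasSetMatrix(n, n, sizeof(float), A, n, dA, n);
        cublasSetMatrix(n, n, sizeof(float), B, n, dB, n);
        /* C = 1.0 * A * B + 0.0 * C, no transposes */
        cublasSgemm('N', 'N', n, n, n, 1.0f, dA, n, dB, n, 0.0f, dC, n);
        cublasGetMatrix(n, n, sizeof(float), dC, n, C, n);
        cublasFree(dA); cublasFree(dB); cublasFree(dC);
        cublasShutdown();
    }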

I read the CUDA doc, but I guess I was focusing on the language itself.

You can see a significant speedup even for 2D transforms or for a batch of 1D transforms.
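
A whole batch of 1D transforms goes through a single CUFFT plan. A minimal sketch, assuming the cufftComplex data is already resident on the card and an in-place transform is acceptable:

    #include <cufft.h>

    /* batch of single-precision complex-to-complex 1D FFTs of
       length n, in place, forward direction */
    void fft_batch(cufftComplex *data, int n, int batch)
    {
        cufftHandle plan;
        cufftPlan1d(&plan, n, CUFFT_C2C, batch);  /* one plan covers the whole batch */
        cufftExecC2C(plan, data, data, CUFFT_FORWARD);
        cufftDestroy(plan);
    }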

I assume this is only single-precision, and I would guess that for numerical stability you must be limited to fairly short FFTs. What kind of peak flops do you see? What's the overhead of shoving data onto the GPU and getting it back? (Or am I wrong that the GPU cannot do an FFT in main (host) memory?)
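
(For comparing numbers: FFT rates are conventionally quoted against the 5*N*log2(N) operation count per complex transform, so a measured wall-clock time converts to Gflop/s as below; the helper name is made up:)

    #include <math.h>

    /* conventional estimate: one complex FFT of length n "costs"
       5 * n * log2(n) floating-point operations */
    double fft_gflops(int n, int batch, double seconds)
    {
        return (double)batch * 5.0 * n * log2((double)n) / seconds / 1e9;
    }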

You can offload just the compute-intensive parts of your code to the GPU from C and C++ (writing a wrapper to call it from Fortran should be trivial).
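
The pattern is just copy in, launch a kernel, copy out. A minimal sketch for a hypothetical hot loop (SAXPY, chosen purely as an example; the grid/block sizes are arbitrary):

    #include <cuda_runtime.h>

    /* hypothetical hot loop moved to the GPU: y = a*x + y */
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    void saxpy_on_gpu(int n, float a, const float *x, float *y)
    {
        float *dx, *dy;
        size_t bytes = n * sizeof(float);
        cudaMalloc((void **)&dx, bytes);
        cudaMalloc((void **)&dy, bytes);
        cudaMemcpy(dx, x, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, y, bytes, cudaMemcpyHostToDevice);
        saxpy<<<(n + 255) / 256, 256>>>(n, a, dx, dy);   /* 256 threads per block */
        cudaMemcpy(y, dy, bytes, cudaMemcpyDeviceToHost);
        cudaFree(dx);
        cudaFree(dy);
    }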

Sure, but what's the cost (in time and CPU overhead) of moving data around like this?
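
(Easy enough to measure with the CUDA event timers; a sketch, with the 64 MB buffer size and pinned host memory chosen arbitrarily for illustration:)

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        const size_t bytes = 64 << 20;        /* 64 MB test buffer */
        float *h, *d;
        cudaMallocHost((void **)&h, bytes);   /* pinned host memory */
        cudaMalloc((void **)&d, bytes);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start, 0);
        cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);
        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("host->device: %.3f ms (%.2f GB/s)\n",
               ms, bytes / (ms * 1e-3) / 1e9);

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(d);
        cudaFreeHost(h);
        return 0;
    }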

The current generation of the hardware supports only single precision,
but there will be a double precision version towards the end of the
year.

Do you mean synthetic doubles? I'm guessing that the hardware isn't going to gain the much wider multipliers necessary to support doubles at the same latency as singles...

PS: I work on CUDA at Nvidia, so I may be a little biased...

I did guess from the Nvidia-limited nature of your reply, but thanks for confirming it.

As far as I know, there are not any well-developed libraries which simply...

by "well-developed", I did also mean "runs on any GPU or at least not a
single vendor"...
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf

Reply via email to