On 6/5/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

>> it seems that now we move right into this direction with GPUs
> They are no good.

> GPUs have no synchronisation between them, which is needed for graph
> reduction.

GPUs are intrinsically parallel devices and might work very well for
parallel graph reduction.

> Also, they are slow when dealing with memory accesses. Some are slow on
> conditional execution.

Ummm...they read and write memory blazingly fast. Modern games happily
run at 60Hz at hi-def resolutions using multipass rendering, combining
many layers of texture to shade each pixel. And when you say 'some',
you must be referring to older devices.

> Take a look at BrookGPU: http://graphics.stanford.edu/projects/brookgpu/
> They have a raytracer on the GPU and it is SLOW because of the high
> cost of tree traversal.

And these guys have a fast ray tracer:
http://www.nvidia.com/page/gz_home.html — so you have demonstrated that
some people can write SLOW ray tracers in one particular language.

I'm not convinced that GPUs are terribly suitable for graph reduction,
but not for the reasons you give. The main problems I see are that you
can't immediately read back memory you've just written, because GPUs
use a streaming model, and that they are limited in how many
instructions they can execute at a time. And those problems may already
have gone away by now, as my knowledge of GPUs is slightly out of date...
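For what it's worth, the read-after-write pattern I mean is easy to see
in a toy in-place graph reducer (a hypothetical sketch, not anything
from BrookGPU or a real STG machine): each reduction overwrites a node,
and the very next visit to a shared node must observe that write —
exactly the dependence a streaming model forbids.

```haskell
import Data.IORef

-- A tiny expression graph. Nodes are mutable so evaluation can
-- overwrite them in place, which is what makes sharing pay off.
data Node = Plus (IORef Node) (IORef Node)
          | Lit Int

-- Evaluate to a literal, updating each node with its result.
eval :: IORef Node -> IO Int
eval ref = do
  node <- readIORef ref
  case node of
    Lit v    -> return v
    Plus a b -> do
      x <- eval a                     -- read children...
      y <- eval b
      writeIORef ref (Lit (x + y))   -- ...then write the result back;
      return (x + y)                 -- later readers must see this write

main :: IO ()
main = do
  l20    <- newIORef (Lit 20)
  l1     <- newIORef (Lit 1)
  shared <- newIORef (Plus l20 l1)   -- a shared subgraph
  root   <- newIORef (Plus shared shared)
  print =<< eval root                -- prints 42
```

The second traversal of `shared` reads the `Lit 21` written moments
earlier by the first; on a write-then-read-next-pass streaming device,
that immediacy is the hard part.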
--
Dan
_______________________________________________
Haskell-Cafe mailing list
[email protected]
http://www.haskell.org/mailman/listinfo/haskell-cafe
