It seems the next Cell is now 100% confirmed to do double precision.
Yet if you look back in history, promises from Nvidia on this can be found going back years.

The only problem with hardware like Tesla is that it is rather hard to
get technical information, such as: which instructions does Tesla actually support in hardware?

This is crucial to know in order to speed up your code.
It is already tough to get real-world codes running faster on GPUs than on CPUs.
The equivalent CPU code has usually been optimized very heavily,
with full knowledge of the hardware.

For example: what is the latency to device RAM when all 128 SPs are hammering it at once?
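To give an idea, this is the sort of microbenchmark you end up writing just to find out for yourself. It doesn't answer the latency question directly, but it shows the effective bandwidth when every multiprocessor is kept busy; the kernel, buffer size and launch configuration below are my own guesses for illustration, nothing taken from Nvidia documentation:

// Minimal CUDA sketch (illustrative only, not from any Nvidia doc):
// time a streaming kernel that keeps all multiprocessors busy, so the
// measured bandwidth reflects every SP hitting device RAM at once.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void stream_copy(const float *in, float *out, int n)
{
    // grid-stride loop: many warps in flight, hopefully enough to hide
    // the global-memory latency behind other warps' work
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        out[i] = in[i];
}

int main()
{
    const int n = 1 << 24;                       // 16M floats, arbitrary size
    float *in, *out;
    cudaMalloc((void **)&in,  n * sizeof(float)); // buffers left uninitialized;
    cudaMalloc((void **)&out, n * sizeof(float)); // we only care about traffic

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);
    cudaEventRecord(t0);
    stream_copy<<<128, 256>>>(in, out, n);        // many blocks -> all SPs busy
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);

    float ms;
    cudaEventElapsedTime(&ms, t0, t1);
    // one read + one write per element
    printf("effective bandwidth: %.1f GB/s\n",
           2.0 * n * sizeof(float) / ms / 1e6);
    return 0;
}

Whether the number you get back is limited by latency, by the memory controller, or by something else entirely is exactly what you cannot tell without the hardware documentation.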

Nvidia gives out zero information on this and offers no support for it either.

That has to change in order to get GPU computing more into the mainstream.

When I calculate on paper, for some applications a GPU can potentially be
a factor of 4-8 faster than a standard 2.4 GHz quad-core is right now.
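To show what I mean, here is the back-of-envelope version; the numbers are my own rough assumptions for a G80-class board versus a 2.4 GHz quad-core with dual-channel DDR2-800, not vendor-quoted figures:

\[
\frac{\text{GPU bandwidth}}{\text{CPU bandwidth}} \approx \frac{80\ \text{GB/s}}{12.8\ \text{GB/s}} \approx 6,
\qquad
\frac{\text{GPU peak (SP)}}{\text{CPU peak (SP)}} \approx \frac{350\ \text{GFLOP/s}}{77\ \text{GFLOP/s}} \approx 4.5
\]

Either ratio lands in that 4-8 range, and that is before you lose anything to the problems described above.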

Getting that performance out of the GPU is more than a full-time task, however,
without in-depth technical hardware data on the GPU.

Vincent

On May 5, 2008, at 9:40 PM, John Hearns wrote:

On Fri, 2008-05-02 at 14:05 +0100, Ricardo Reis wrote:
Does anyone know if/when there will be double-precision floating point on those
little toys from Nvidia?


Ricardo,
  I think CUDA is a great concept, and am starting to work with it at
home.
I recently went to a talk by David Kirk, as part of the "world tour".
I think the answer to your question is Real Soon Now.

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
