On 1/28/10 12:00 PM, Jon Forrest <jlforr...@berkeley.edu> wrote:
> A GPU cluster is different from a traditional
> HPC cluster in several ways:

> 1) The CPU speed and number of cores are not that important because most of the computing will be done inside the GPU.
The GPU does the specific operations called by the application, but you still need enough CPU to handle the memory operations and PCIe I/O required to keep the GPU fed.
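
To illustrate what "feeding the GPU" looks like on the host side, here is a minimal CUDA sketch (my own example, not from the thread; the `scale` kernel and buffer sizes are hypothetical placeholders). The CPU's job is mostly allocating pinned memory and queuing asynchronous PCIe transfers and kernel launches:

    // Sketch only: pinned host memory plus async copies on a stream so
    // PCIe transfers can overlap with kernel execution.
    #include <cuda_runtime.h>

    __global__ void scale(float *d, int n) {      // placeholder kernel
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) d[i] *= 2.0f;
    }

    int main() {
        const int n = 1 << 20;
        float *h, *d;
        cudaMallocHost(&h, n * sizeof(float));    // pinned memory: needed for true async DMA
        cudaMalloc(&d, n * sizeof(float));
        cudaStream_t s;
        cudaStreamCreate(&s);

        cudaMemcpyAsync(d, h, n * sizeof(float), cudaMemcpyHostToDevice, s);
        scale<<<(n + 255) / 256, 256, 0, s>>>(d, n);
        cudaMemcpyAsync(h, d, n * sizeof(float), cudaMemcpyDeviceToHost, s);
        cudaStreamSynchronize(s);                 // the CPU mostly queues work and moves data

        cudaFree(d);
        cudaFreeHost(h);
        return 0;
    }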

> 2) Serious GPU boards are large enough that they don't easily fit into standard 1U pizza boxes. Plus, they require more power than the standard power supplies in such boxes can provide. I'm not familiar with the boxes that should therefore be used in a GPU cluster.
There are 1U system designs that work well, using a passively cooled GPU that relies on the chassis cooling infrastructure just as the CPUs do. They come with a power supply sized to support the GPU as well as the CPUs, memory, disk, etc.
> 3) Ideally, I'd like to put more than one GPU card in each compute node, but then I hit the issues in #2 even harder.
Not in a 1U system, unless you use the NVIDIA S1070 external GPU chassis. Even then, if your application can be bottlenecked by having less than full PCIe x16 bandwidth to the GPUs, the S1070 approach would be less than optimal compared to a system with two dedicated, full-speed PCIe x16 slots.
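
If you want to check whether a given slot (or the S1070's shared host connection) is actually a bandwidth bottleneck for you, one rough approach (again my own sketch, not from the thread) is to time a large pinned host-to-device copy and compare it against the ~8 GB/s theoretical peak of a dedicated PCIe 2.0 x16 link (real-world numbers are typically lower):

    // Sketch only: measure effective host->device bandwidth with a 256 MB
    // pinned buffer and CUDA events.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        const size_t bytes = 256UL << 20;
        float *h, *d;
        cudaMallocHost(&h, bytes);                // pinned, so the copy runs at full DMA speed
        cudaMalloc(&d, bytes);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("Host->device: %.2f GB/s\n", (bytes / 1e9) / (ms / 1e3));

        cudaFree(d);
        cudaFreeHost(h);
        return 0;
    }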

--Jeff

--
------------------------------
Jeff Johnson
Manager
Aeon Computing

jeff.john...@aeoncomputing.com
www.aeoncomputing.com
t: 858-412-3810   f: 858-412-3845
m: 619-204-9061

4905 Morena Boulevard, Suite 1313 - San Diego, CA 92117

