----- "Kilian CAVALOTTI" <[EMAIL PROTECTED]> wrote: > AFAIK, the multi GPU Tesla boxes contain up to 4 Tesla processors, but > are hooked to the controlling server with only 1 PCIe link, right? > Does this spell like "bottleneck" to anyone?
The nVidia website says:

http://www.nvidia.com/object/tesla_tech_specs.html

 # 6 GB of system memory (1.5 GB dedicated memory per GPU)
 [...]
 # Connects to host via cabling to a low power PCI Express
   x8 or x16 adapter card

So my guess is that you'd be using local RAM, not the host system's RAM,
whilst computing; the PCIe link would mostly only get exercised when
staging data on and off the GPUs (a minimal sketch of that pattern is
below my sig).

I took a photo of an open Tesla box at SC'07:

http://flickr.com/photos/chrissamuel/2267613381/in/set-72157603919719911/

(click on "All sizes" for a larger version); my guess is that the DIMMs
are hidden under the shrouds. There are a lot of fans in there..

-- 
 Christopher Samuel - (03) 9925 4751 - Systems Manager
 The Victorian Partnership for Advanced Computing
 P.O. Box 201, Carlton South, VIC 3053, Australia
 VPAC is a not-for-profit Registered Research Agency
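
To make that guess concrete, here's a minimal CUDA sketch of the usual
offload pattern (the kernel, array size and variable names are made up
purely for illustration, not taken from any real Tesla code): the
explicit cudaMemcpy calls are the only places the shared PCIe link is
used, and everything the kernel reads or writes lives in the GPU's
dedicated on-board memory.

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* Hypothetical kernel: scales a vector in place, touching only the
   GPU's dedicated on-board memory. */
__global__ void scale(float *d_data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        d_data[i] *= factor;
}

int main(void)
{
    const int n = 1 << 20;                /* ~4 MB of floats, made-up size */
    const size_t bytes = n * sizeof(float);

    float *h_data = (float *) malloc(bytes);
    for (int i = 0; i < n; i++)
        h_data[i] = 1.0f;

    float *d_data;
    cudaMalloc((void **) &d_data, bytes);

    /* One trip across the PCIe link: host RAM -> GPU's dedicated RAM. */
    cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);

    /* Kernel launches, and any repeated passes over the data, stay on
       the GPU; nothing crosses the PCIe link here. */
    scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);

    /* One trip back: GPU's dedicated RAM -> host RAM. */
    cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);

    printf("h_data[0] = %f\n", h_data[0]);

    cudaFree(d_data);
    free(h_data);
    return 0;
}

Built with nvcc, something like this crosses the x8/x16 adapter twice
(once in, once out) no matter how many passes the kernel makes over the
data, which is presumably why hanging four GPUs off a single host link
is less of a bottleneck than it first sounds, as long as each GPU's
working set fits in its 1.5 GB.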