Has anyone tested scaling of NAMD/CUDA over QLogic or ConnectX QDR
interconnects for a large number of IB cards and GPUs? I've listened to John
Stone's presentation on VMD and NAMD CUDA acceleration. The impression I
took away from it was that one QDR link per GPU would probably be necessary
to scale efficiently. The 60-node, 60-GPU, DDR IB cluster used for the
initial testing was already saturating the interconnect. Later tests on the
new GT200-based cards show even larger gains for the GPUs: one GPU doing
the work of 12 CPU cores, or 8 GPUs matching 96 cores, were the numbers I
saw. So at a ratio of one GPU per 12 cores' worth of work, interconnect
performance will be very important.
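
To put rough numbers on that reasoning, here is a quick back-of-envelope
sketch in Python. The per-core traffic figure and the nominal link rates
are assumptions I've plugged in for illustration, not measurements from the
tests above; substitute your own profile numbers.

    # Back-of-envelope sketch (illustrative assumptions, not measured numbers):
    # if one GPU does the work of ~12 CPU cores, a GPU node has to push roughly
    # 12x the per-core NAMD communication traffic through its IB link.
    per_core_bw = 50        # assumed per-core NAMD traffic in MB/s (a guess)
    cores_per_gpu = 12      # the 1 GPU : 12 cores ratio quoted above
    gpus_per_node = 1
    links = {"DDR IB 4x": 2000, "QDR IB 4x": 4000}  # nominal unidirectional MB/s

    needed = per_core_bw * cores_per_gpu * gpus_per_node
    for name, bw in links.items():
        print("%s: ~%d MB/s needed of %d MB/s nominal (%.0f%% of link)"
              % (name, needed, bw, 100.0 * needed / bw))

Even with these rough numbers, a single-GPU node eats a much larger slice
of a DDR link than of a QDR link, which is why the one-QDR-per-GPU rule of
thumb seems plausible to me.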
Thanks,
Dow
__________________________________
Dow P. Hurst, Research Scientist
Department of Chemistry and Biochemistry
University of North Carolina at Greensboro