On Tue, 23 May 2006, Joachim Worringen wrote:
> The cabling is already quite tricky for 3D setups. I don't think you
> would like to go beyond. There are good reasons why nobody has done
> it yet.
It's not clear whether you refer only to large clusters or include the "experimental" ones as well, in which case I'd like to point to http://www.sicmm.org/vrana.html where, under the "2000" entry, you can find a mention of a 6D one.

But I wonder how such a system (more than 2 HPC NICs per node) would work now. Has any interconnect vendor attempted to install and successfully use more than 2 NICs per computer? How were they connected to the underlying bus(es) (PCI-X, PCI-E, maybe even HyperTransport)? And what's the gain with respect to the 1 NIC + switch case, from all points of view: price, latency when all NICs in a computer communicate simultaneously, CPU usage, etc.?

-- 
Bogdan Costescu

IWR - Interdisziplinaeres Zentrum fuer Wissenschaftliches Rechnen
Universitaet Heidelberg, INF 368, D-69120 Heidelberg, GERMANY
Telephone: +49 6221 54 8869, Telefax: +49 6221 54 8868
E-mail: [EMAIL PROTECTED]
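P.S. To make the port-count arithmetic behind these questions concrete, here is a minimal back-of-the-envelope sketch (my own illustration, not anything measured on a real system; it assumes one NIC port per torus link, which may not match how real multi-port NICs are packaged, and the torus_stats helper is hypothetical):

    # Back-of-the-envelope only; assumes one NIC port per torus link,
    # which may not match how real multi-port NICs are packaged.

    def torus_stats(k, d):
        """d-dimensional torus, k nodes per dimension (k-ary d-cube)."""
        nodes = k ** d
        ports_per_node = 2 * d       # two neighbours per dimension
        diameter = d * (k // 2)      # worst-case hops, thanks to wrap-around links
        return nodes, ports_per_node, diameter

    for k, d in [(4, 3), (3, 6)]:    # a small 3D torus and a 6D one
        n, p, diam = torus_stats(k, d)
        print(f"{d}D torus, k={k}: {n} nodes, "
              f"{p} NIC ports per node, diameter {diam} hops")

    # 1 NIC + switch, for comparison: 1 port per node, and every path is
    # node -> switch -> node regardless of cluster size.

Under these assumptions a 6D torus needs 12 NIC ports per node, which is exactly why the bus-attachment and simultaneous-communication questions above matter.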
