Hello Jaime, on Monday, May 5, 2008, you wrote:
JP> Hello,
JP> Just a small question: does anybody have experience with many-core
JP> (16) nodes and InfiniBand? Some of our users need shared memory, but
JP> we also want to build a normal cluster for MPI apps, so we think this
JP> could be a solution. Let's say about 8 machines (96 processors) plus
JP> InfiniBand. Does that sound correct? I'm aware of the bottleneck that
JP> a single IB interface for all the MPI cores represents; is there any
JP> possibility of bonding?

Bonding (or multi-rail) does not make sense with "standard IB" on PCIe x8,
since the PCIe connection already limits the transfer rate of a single IB
link (see the rough calculation below).

My hint would be to go for InfiniPath from QLogic or the new ConnectX from
Mellanox: message rate is probably your limiting factor, and those
technologies have a huge advantage there over standard InfiniBand SDR/DDR.
Both InfiniPath and ConnectX are available as DDR InfiniBand and provide a
bandwidth of more than 1800 MB/s.

Cheers,
Jan
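P.S. A quick back-of-the-envelope check of the PCIe x8 argument, as a
sketch only: it assumes PCIe 1.x host interfaces (about 250 MB/s per lane)
and 8b/10b line encoding on the IB link, neither of which is spelled out
above, and it ignores further protocol overhead.

PCIE_GEN1_LANE_MB_S = 250                  # PCIe 1.x: 2.5 GT/s, 8b/10b -> ~250 MB/s per lane
pcie_x8_ceiling = 8 * PCIE_GEN1_LANE_MB_S  # theoretical limit of an x8 slot

def ib_4x_data_rate(gbit_per_lane):
    """4x IB link: 4 lanes, 8b/10b leaves 80% of the signalling rate for data (MB/s)."""
    return 4 * gbit_per_lane * 0.8 * 1000 / 8

sdr = ib_4x_data_rate(2.5)   # SDR: 2.5 Gbit/s per lane -> 1000 MB/s
ddr = ib_4x_data_rate(5.0)   # DDR: 5.0 Gbit/s per lane -> 2000 MB/s

print("PCIe x8 (gen1) ceiling : %.0f MB/s" % pcie_x8_ceiling)
print("IB 4x SDR data rate    : %.0f MB/s" % sdr)
print("IB 4x DDR data rate    : %.0f MB/s" % ddr)

# A single DDR link already reaches the theoretical ceiling of the x8
# slot (and the real-world PCIe payload rate is lower still), so a
# second rail behind the same slot cannot add node bandwidth.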