Alan Louis Scheinine wrote:
> It depends very much on hardware and on the specific program.
> My experience with MPICH on Ethernet and MVAPICH, MVAPICH2,
> OpenMPI on Infiniband is that different programs find one
> or the other better, no clear winner. Also, small changes
> in some options that one learns about from other sites or
> from Google can make big differences in performance.
>
> I regret that I have no organized documentation or performance
> tables that could provide a more precise answer.
>     Alan Scheinine
Thank you for the input!

The system is a Dell T605 with two quad-core Opteron 2354 CPUs and 32 GB of RAM. A second identical machine may be added later (probably connected over Ethernet) if the need arises. That means most or all of the communication will go over shared memory to begin with.

The app is LS-DYNA, an FEM suite from LSTC: http://www.ls-dyna.com/

I'm just a sysadmin tasked with installing the machine and have limited knowledge of the math, but my guess is that the code is quite communication-heavy, considering the specs of the machine and my assumption that FEM is essentially SIMD. Am I correct in my assumptions, and does anyone have experience with different MPI implementations on such codes?

/Linus H
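PS. One way to compare the candidate MPI stacks on this box is a simple ping-pong micro-benchmark. Below is a minimal sketch (standard MPI calls only; the buffer sizes and iteration count are arbitrary choices, not tuned values), not anything LS-DYNA-specific:

/* pingpong.c -- minimal MPI ping-pong between two ranks.
 * Times round trips over a range of message sizes; with both
 * ranks placed on one node this exercises the shared-memory
 * transport of whichever MPI implementation built it.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int iters = 1000;          /* arbitrary repeat count */
    const int maxbytes = 1 << 20;    /* up to 1 MiB messages   */
    int rank, nprocs, bytes, i;
    char *buf;
    double t0, t1, rtt;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    if (nprocs != 2) {
        if (rank == 0)
            fprintf(stderr, "run with exactly 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    buf = malloc(maxbytes);

    for (bytes = 1; bytes <= maxbytes; bytes *= 2) {
        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < iters; i++) {
            if (rank == 0) {         /* rank 0: send, then wait for the echo */
                MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else {                 /* rank 1: echo everything back */
                MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();
        if (rank == 0) {
            rtt = (t1 - t0) / iters;     /* seconds per round trip */
            printf("%8d bytes  %10.2f us  %8.2f MB/s\n",
                   bytes, rtt * 1e6, 2.0 * bytes / rtt / 1e6);
        }
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Built with each implementation's mpicc and started with both ranks on one node, e.g. "mpirun -np 2 ./pingpong", the traffic should go through the shared-memory path that will carry most communication here. With OpenMPI of that era one could, if I recall correctly, force the shared-memory transport with "--mca btl sm,self".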