On Fri, 26 Sep 2008 at 6:28pm, Linus Harling wrote

Thank you for the input! The system is a Dell T605 with two quad-core
Opteron 2354 CPUs and 32GB of RAM. A second, identical machine may be
added (probably connected via Ethernet) if the need arises, which means
most or all of the communication will be via shared memory to begin with.

The app is LS-DYNA, a FEM-suite from LSTC: http://www.ls-dyna.com/

I'm just a sysadmin tasked with installing the machine and have limited
knowledge of the math involved, but my guess is that the code is quite
communication-heavy, given the specs of the machine and the assumption
that FEM is essentially SIMD. Am I correct in my assumptions, and does
anyone have experience with different MPI implementations on such codes?

DYNA has two main solver types -- explicit (iteration-based) and implicit (essentially matrix inversion). If you're only using one system and the explicit solver, then there's no need for any MPI, as SMP ls-dyna is multi-threaded. Last I knew, however, the implicit solver is only parallelized in mpp-dyna.
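
To illustrate the difference, launching the two flavors looks roughly
like this (the binary names, input deck, and core counts below are just
placeholders -- exact names and options vary by LS-DYNA version and site
install):

  # SMP (multi-threaded) explicit run on a single box -- no MPI involved
  ls-dyna i=input.k ncpu=8

  # MPP run via MPI -- needed for multi-node runs and the implicit solver
  mpirun -np 8 mpp-dyna i=input.k

With a single 8-core box like yours, the SMP binary with ncpu=8 is the
simplest place to start.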

I've found the explicit solver not to be overly communication-heavy, i.e. I've seen decent scaling using plain-jane GigE and very modest node counts. Unfortunately, I haven't done any benchmarks of the various MPI implementations available. HTH.
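
If you do end up comparing MPI stacks yourself, the simplest test is to
run the same deck under each implementation and compare the wall-clock
times each run reports at the end. A sketch, with purely illustrative
install paths:

  # Same input deck, same nodes, different MPI stacks
  /opt/openmpi/bin/mpirun -np 16 mpp-dyna i=input.k
  /opt/mpich/bin/mpiexec -n 16 mpp-dyna i=input.k

Note that mpp-dyna is normally built against a specific MPI library, so
you'd need a matching binary for each stack you test.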

--
Joshua Baker-LePain
QB3 Shared Cluster Sysadmin
UCSF
