WRF has been under development for 10 years. It's got an OpenMP flavor,
an MPI flavor and a hybrid one. We still don't have all the bugs worked
out of the hybrid so that it can handle large, high resolution domains
without being slower than the MPI version. And, yeah, the OpenMP geeks
working on this... and the MPI folks, are good.
Hybrid isn't easy, and it isn't foolproof. And, as another thought,
OpenMP isn't always the best solution to the problem.
gerry
[EMAIL PROTECTED] wrote:
> -------------- Original message ----------------------
> From: Toon Knapen <[EMAIL PROTECTED]>
>
> Greg Lindahl wrote:
>> In real life (i.e. not HPC), everyone uses message passing between
>> nodes. So I don't see what you're getting at.
>
> Many on this list suggest that using multiple MPI processes on one and
> the same node is superior to MT approaches, IIUC. However, I have the
> impression that almost the whole industry is looking into MT to benefit
> from multi-core without even considering message passing. Why is that so?
I think what Greg and others are really saying is that if you have to use a
distributed-memory model (MPI) as a first-order response to meet your
scalability requirements, then the extra coding effort and complexity required
to create a hybrid code may not be a good performance return on your
investment. If, on the other hand, you only need to scale within a single SMP
node (with cores and sockets on a single board growing in number, this returns
more performance than in the past), then you may be able to avoid MPI and
choose a simpler model like OpenMP. If you have already written an efficient
MPI code, then (with some exceptions) the performance gain divided by the
hybrid coding effort may seem small.
Development in an SMP environment is easier. I know of a number of sites
that work this way. The experienced algorithm folks work up the code in
OpenMP on, say, an SGI Altix or Power6 SMP; then they get a dedicated MPI
coding expert to convert it later for scalable production operation on a
cluster. In this situation, they do end up with hybrid versions in some
cases. In non-HPC or smaller-workgroup contexts, your production code may
not need to be converted.
Cheers,
rbw
--
"Making predictions is hard, especially about the future."
Niels Bohr
--
Richard Walsh
Thrashing River Consulting--
5605 Alameda St.
Shoreview, MN 55126
Phone #: 612-382-4620
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf
--
Gerry Creager -- [EMAIL PROTECTED]
Texas Mesonet -- AATLT, Texas A&M University
Cell: 979.229.5301 Office: 979.458.4020 FAX: 979.862.3983
Office: 1700 Research Parkway Ste 160, TAMU, College Station, TX 77843