-------------- Original message ----------------------
From: Toon Knapen <[EMAIL PROTECTED]>
> Greg Lindahl wrote:
> >
> > In real life (i.e. not HPC), everyone uses message passing between
> > nodes. So I don't see what you're getting at.
> >
>
> Many on this list suggest that using multiple MPI-processes on one and
> the same node is superior to MT approaches IIUC. However I have the
> impression that almost the whole industry is looking into MT to benefit
> from multi-core without even considering message-passing. Why is that so?
I think what Greg and others are really saying is that if you have to use a
distributed-memory model (MPI) as a first-order response to meet your
scalability requirements, then the extra coding effort and complexity required
to create a hybrid code may not be a good performance return on your
investment. If, on the other hand, you only need to scale within a single SMP
node (with the number of cores and sockets on a single board growing, this
returns more performance than in the past), then you may be able to avoid MPI
and choose a simpler model like OpenMP. If you have already written an
efficient MPI code, then (with some exceptions) the performance gain divided
by the hybrid coding effort may seem small.
Development in an SMP environment is easier. I know of a number of sites that
work this way: the experienced algorithm folks work up the code in OpenMP on,
say, an SGI Altix or Power6 SMP, then a dedicated MPI coding expert converts
it later for scalable production operation on a cluster. In this situation
they do end up with hybrid versions in some cases. In non-HPC or smaller
workgroup contexts, your production code may not need to be converted at all.
Cheers,
rbw
--
"Making predictions is hard, especially about the future."
Niels Bohr
--
Richard Walsh
Thrashing River Consulting--
5605 Alameda St.
Shoreview, MN 55126
Phone #: 612-382-4620
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit
http://www.beowulf.org/mailman/listinfo/beowulf