On Tue, Oct 02, 2007 at 02:52:21PM -0400, Larry Stewart wrote:
> On the other hand, here's a fellow who got a 4X speedup by going to hybrid:
>
> www.nersc.gov/nusers/services/training/classes/NUG/Jun04/NUG2004_yhe_hybrid.ppt
It's always hard to evaluate how real such claims are. But I will note
that the question of OpenMP vs MPI has been around for a long time;
for example:

http://www.beowulf.org/archive/2001-March/002718.html

My general impression is that it is a waste of time to convert from
pure MPI to a hybrid approach. For example:

www.sc2000.org/techpapr/papers/pap.pap214.pdf
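To make the discussion concrete, here is a minimal sketch of what a
hybrid conversion typically looks like (a hypothetical toy reduction,
not code from either paper above): MPI is initialized for threaded
use, OpenMP splits the work within each process, and MPI still does
the inter-process reduction.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, nranks;

        /* MPI_THREAD_FUNNELED is enough when only the master thread
           makes MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        double local = 0.0, total = 0.0;

        /* OpenMP handles parallelism inside the node... */
        #pragma omp parallel for reduction(+:local)
        for (int i = rank; i < 10000000; i += nranks)
            local += 1.0 / (1.0 + (double)i);

        /* ...and MPI handles communication between processes. */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);
        if (rank == 0)
            printf("total = %f\n", total);

        MPI_Finalize();
        return 0;
    }

Even this toy version has to worry about the thread-support level and
about which thread is allowed to call MPI; in a real code, that extra
complexity is what the hybrid speedup has to pay for.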
On the
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Geoff Jacobs
>
> Using OpenMP per node is going to be much less portable and
> much more complex to implement. It's better to let your MPI
> library handle things the smart way.
At QLogic we're qui
[EMAIL PROTECTED] wrote:
Using OpenMP per node is going to be much less portable and much more
complex to implement. It's better to let your MPI library handle things
the smart way.
--
Geoffrey D. Jacobs
To have no errors
would be life without meaning
No struggle, no joy
My understanding is that on a multi-core machine, MPI communication
routines (MPI_Send, etc.) go through shared memory, so a message
between two ranks on the same node costs a memory copy or two rather
than a trip over the interconnect. Accordingly, message passing within
a multi-core node should be very fast compared to your present cluster.

That said, it seems like all the performance benchm
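An easy way to check this on a given cluster is a simple ping-pong
timing between two ranks, run once with both ranks on one node and
once with the ranks on different nodes. A minimal sketch (message size
and iteration count are arbitrary; this is a sanity check, not a
rigorous benchmark):

    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    #define NITER  1000
    #define MSGLEN (64 * 1024)

    int main(int argc, char **argv)
    {
        static char buf[MSGLEN];
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        memset(buf, 0, sizeof(buf));

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();

        /* Ranks 0 and 1 bounce a message back and forth. */
        for (int i = 0; i < NITER; i++) {
            if (rank == 0) {
                MPI_Send(buf, MSGLEN, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSGLEN, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, MSGLEN, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, MSGLEN, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        double t1 = MPI_Wtime();
        if (rank == 0)
            printf("mean round trip: %g usec\n",
                   1e6 * (t1 - t0) / NITER);

        MPI_Finalize();
        return 0;
    }

The ratio between the intra-node and inter-node round-trip times gives
a rough idea of how much the shared-memory path buys you.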
Just for the record, I hate HTML-encoded e-mails.
[Please accept our apologies if you receive multiple copies of this email]

(Selected high-quality papers from the PMEO-UCNS'08 workshop will
appear in a Special Issue of the journal International Journal of
Parallel, Emergent and Distributed Systems.)
This is perhaps a naive question.

Ten years ago we started using the SP2, but we later changed to an
Intel-based Linux Beowulf in 2001. In our University there are quite a
number of MPI-based parallel programs running on a 178-node dual-Xeon
PC cluster that was installed 4 years ago.

We are now