Because my target application is easy to distribute, and also tries to optimize its own operating environment (by fiddling with its own parameters), I'm thinking about using MPI for the case where a node wants to hand a job to a specific remote node (e.g., an underutilized node, or one that already has the relevant statistics loaded locally), and OpenMP for when the app doesn't care which specific node does a job. There's a bottleneck in my app where I'd just have two calls, one for spawning in each mode. I was thinking that OpenMP might be smarter about taking advantage of nearby cores on a CPU, while my app might be smarter about taking advantage of a CPU's current environment, or might learn to be.
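Concretely, that bottleneck would boil down to something like the sketch below (job_t, TAG_JOB, run_job(), and the sizes are all placeholders I've made up, not real code from the app):

/* Sketch of the two-mode dispatch -- compile with e.g. "mpicc -fopenmp".
 * job_t, TAG_JOB, and run_job() are made-up placeholders. */
#include <mpi.h>

#define TAG_JOB 42

typedef struct { double payload[64]; } job_t;

/* Stand-in for the real per-job work. */
static void run_job(job_t *job)
{
    int i;
    for (i = 0; i < 64; i++)
        job->payload[i] *= 2.0;
}

/* target >= 0: the app picked a specific node; ship the jobs there
 * over MPI.  target < 0: no preference; fan the jobs out across the
 * local cores with OpenMP. */
static void dispatch(job_t *jobs, int njobs, int target)
{
    if (target >= 0) {
        MPI_Send(jobs, njobs * (int)sizeof(job_t), MPI_BYTE,
                 target, TAG_JOB, MPI_COMM_WORLD);
    } else {
        int i;
        #pragma omp parallel for
        for (i = 0; i < njobs; i++)
            run_job(&jobs[i]);
    }
}

int main(int argc, char **argv)
{
    job_t jobs[8] = {{{0}}};
    MPI_Init(&argc, &argv);
    dispatch(jobs, 8, -1);   /* -1 = "no preference": local OpenMP mode */
    MPI_Finalize();
    return 0;
}

The MPI branch carries the app's own placement decision; the OpenMP branch just trusts the runtime to keep the nearby cores busy.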
But I'm a long way off still; this is all hypothetical.

Peter

On 11/28/07, amjad ali <[EMAIL PROTECTED]> wrote:
>
> Hello,
>
> Today, clusters with multicore nodes are quite common, and the cores
> within a node share memory.
>
> Which implementations of MPI (commercial or free) make automatic and
> efficient use of shared memory for message passing within a node? That
> is, which MPI libraries automatically communicate over shared memory,
> instead of over the interconnect, within the same node?
>
> regards,
> Ali.
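For what it's worth, the major implementations can do this; Open MPI, for one, lets you select transports explicitly, so asking for its shared-memory BTL alongside TCP looks something like this (my_app is just a stand-in for whatever binary you run):

    mpirun --mca btl self,sm,tcp -np 8 ./my_app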