At 07:05 PM 1/29/2009, Greg Lindahl wrote:
> On Thu, Jan 29, 2009 at 07:22:10PM -0500, Mark Hahn wrote:
>> I'll bite: suppose I run large MPI jobs (say, 1k ranks)
>> and have 8 cores/node and 1 NIC/node. Under what circumstances
>> would a node be primarily worried about message rate, rather than latency?
>
> Well, let's say that you're doing a stencil computation on a 3D grid,
> diagonals included. Then each cycle each core needs to send to 26
> neighbors, and then receive from 26 neighbors. Even if you have
> fat-ish nodes (~8 cores) and a clever layout of the cores onto the 3D
> grid, that's a lot of off-node messages in a row. And that's message
> rate.
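In MPI terms that pattern is something like the sketch below: 26
nonblocking receives and 26 nonblocking sends posted back to back
every cycle, which is exactly where per-message overhead bites. The
neighbor_rank() helper and the buffer bookkeeping are invented for
illustration, not taken from anyone's actual code.

#include <mpi.h>

/* Hypothetical helper: rank of the neighbor at offset (dx,dy,dz). */
extern int neighbor_rank(int dx, int dy, int dz);

/* Post all 26 receives and 26 sends back to back -- the pattern
   where per-message overhead (message rate) matters most. */
void halo_exchange26(double *sbuf[26], double *rbuf[26], int count[26],
                     MPI_Comm comm)
{
    MPI_Request reqs[52];
    int n = 0;
    for (int dz = -1; dz <= 1; dz++)
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++) {
                if (!dx && !dy && !dz) continue;   /* skip self */
                int peer = neighbor_rank(dx, dy, dz);
                MPI_Irecv(rbuf[n], count[n], MPI_DOUBLE, peer, 0, comm,
                          &reqs[2*n]);
                MPI_Isend(sbuf[n], count[n], MPI_DOUBLE, peer, 0, comm,
                          &reqs[2*n + 1]);
                n++;
            }
    MPI_Waitall(52, reqs, MPI_STATUSES_IGNORE);
}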
John Adams said "Facts are stubborn things," and there just aren't
enough of them in your example to determine whether bandwidth or
latency dominates communication time. I have a 3-d code that does
quite a lot of that. Assuming each processor has a 100 x 100 x 100
grid, the communication to the six face neighbors, 10,000 elements
each, may dominate. If the basic grid dimension is 10 instead of 100,
the communication to the 12 edge neighbors and the 8 corner neighbors
may take three times as long as those 6 face messages, since those 20
messages are tiny and pay mostly latency (a quick model below bears
this out).
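Here is a back-of-the-envelope version of that comparison, charging
each message an assumed latency plus size/bandwidth, with round
GigE-ish numbers (30 microsecond latency, 125 MB/s); the figures are
assumptions, not measurements:

#include <stdio.h>

int main(void)
{
    const double lat = 30e-6;   /* s per message, assumed        */
    const double bw  = 125e6;   /* bytes/s, roughly GigE, assumed */
    const double el  = 8.0;     /* bytes per 64-bit value         */

    for (int n = 10; n <= 100; n *= 10) {
        double faces = 6.0  * (lat + n*(double)n*el/bw); /* n*n elements each */
        double edges = 12.0 * (lat + n*el/bw);           /* n elements each   */
        double corns = 8.0  * (lat + el/bw);             /* 1 element each    */
        printf("n=%3d: faces %6.1f us, edges+corners %6.1f us\n",
               n, faces*1e6, (edges+corns)*1e6);
    }
    return 0;
}

With those assumptions, n = 10 gives about 220 microseconds for the 6
face messages against about 610 for the 20 edge and corner messages
(roughly three times), while at n = 100 the faces dominate at about
4000 microseconds.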
The critical hardware number is the size of the message that takes
twice the latency time to transmit, i.e. the size whose time on the
wire equals the latency. For GigE at 30 microseconds, that is 30
kilobits, or about 470 64-bit floating point numbers. For messages
smaller than that, latency dominates; for messages longer than that,
bandwidth does.
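The arithmetic, again with assumed round GigE numbers:

#include <stdio.h>

int main(void)
{
    const double lat = 30e-6;   /* s, assumed GigE latency */
    const double bw  = 1e9;     /* bits/s, GigE line rate  */
    /* Size whose wire time equals the latency, so that
       total time = 2 x latency: */
    double bits = lat * bw;     /* 30,000 bits = 30 kilobits */
    printf("%.0f bits = %.0f 64-bit numbers\n", bits, bits/64.0);
    return 0;                   /* prints about 470 doubles */
}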
The latency dominates in time-stepping finite-difference codes where
you need to do a time step in a few seconds. For steady state
finite-difference codes where you can spend an hour on a single
solution, the bandwidth determines how big a problem you can do.
As the list is wont to say, YMMV.
Mike