I have been coding on the Cell for just over a month now. Nothing serious, just getting subroutines to run on a single SPU. This turns out to be very easy. Now I am looking at parallel programming between the SPUs, and this seems much more difficult. The API, as far as I have read it, does not have nice routines for message passing between the SPUs; you have to set up your own memory transfers or address remote memory directly using the MFC.
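For concreteness, here is a minimal SPU-side sketch of what "setting up your own memory transfers" looks like, assuming the spu_mfcio.h intrinsics from the IBM Cell SDK (mfc_get, mfc_put, tag-group waits). The buffer size, tag number, and the use of argp as the effective address are illustrative placeholders, not a worked protocol.

    /* Sketch only: pull a chunk into local store, compute, push it back.
     * Assumes the spu_mfcio.h intrinsics; compile with spu-gcc. */
    #include <spu_mfcio.h>

    #define CHUNK_BYTES 4096   /* multiple of 16, at most 16 KB per DMA */

    static volatile char buf[CHUNK_BYTES] __attribute__((aligned(128)));

    int main(unsigned long long spe_id, unsigned long long argp,
             unsigned long long envp)
    {
        unsigned int tag = 1;   /* DMA tag group, 0..31 */

        /* Fetch from main storage (or another SPE's mapped local store);
         * argp is assumed to carry the source effective address. */
        mfc_get(buf, argp, CHUNK_BYTES, tag, 0, 0);
        mfc_write_tag_mask(1 << tag);   /* wait only on this tag group */
        mfc_read_tag_status_all();      /* block until the DMA completes */

        /* ... compute on buf ..., then write the result back out. */
        mfc_put(buf, argp, CHUNK_BYTES, tag, 0, 0);
        mfc_write_tag_mask(1 << tag);
        mfc_read_tag_status_all();

        return 0;
    }

The point is that the programmer owns the whole transfer: alignment, double-buffering, and tag management all have to be done by hand, which is exactly the gap the question below is about.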
Running independent codes on each SPU is pretty easy, but that only addresses the class of embarrassingly parallel problems. I would like to encourage a discussion on techniques for splitting and parallelizing a problem across this "dual-layer" parallel architecture (MPI and MFC). It seems to me a good starting point is to divide a problem, say a CFD simulation, into large sections at the MPI layer and then further subdivide each section across the SPUs of an individual Cell processor. This raises the issue of message passing between disparate SPUs. Any input on how this might work and perform?

Tim Wilcox
Terra Soft Solutions
505-239-4600
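To make the two-level split concrete, here is a hedged host-side sketch: MPI ranks take coarse slabs of a 1-D domain, and each rank then carves its slab into per-SPU chunks. The domain size, NUM_SPUS, and the printed ranges are illustrative only, and the actual SPU dispatch (libspe contexts, local-store DMA) is omitted.

    /* Sketch only: coarse decomposition at the MPI layer, fine
     * decomposition per Cell.  Compile with mpicc. */
    #include <mpi.h>
    #include <stdio.h>

    #define DOMAIN_CELLS 1048576   /* total grid cells, illustrative */
    #define NUM_SPUS     8         /* SPUs used per Cell processor */

    int main(int argc, char **argv)
    {
        int rank, nranks;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        /* Coarse split: each MPI rank (one Cell processor) owns a slab. */
        int slab = DOMAIN_CELLS / nranks;
        int lo   = rank * slab;
        int hi   = (rank == nranks - 1) ? DOMAIN_CELLS : lo + slab;

        /* Fine split: the slab is subdivided across the SPUs; each
         * sub-range would be handed to one SPE context for DMA + compute. */
        for (int s = 0; s < NUM_SPUS; s++) {
            int chunk = (hi - lo) / NUM_SPUS;
            int s_lo  = lo + s * chunk;
            int s_hi  = (s == NUM_SPUS - 1) ? hi : s_lo + chunk;
            printf("rank %d, SPU %d: cells [%d, %d)\n", rank, s, s_lo, s_hi);
        }

        MPI_Finalize();
        return 0;
    }

Under this split, halo exchange between neighbouring slabs would sit at the MPI layer (e.g. MPI_Sendrecv with the rank-1/rank+1 neighbours), while boundaries between sub-ranges inside one Cell would have to move through local-store DMA, which is exactly where the SPU-to-SPU message-passing question above comes in.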