Thanks for the link; my ultimate interest, though, is in an architecture that
could scale to multiple machines rather than multiple cores with shared memory
on a single machine.  Has there been any interest or progress in making DPH
run on multiple machines and other NUMA architectures?

Cheers,
Greg


On Apr 19, 2010, at 3:33 PM, Sebastian Sylvan wrote:

> 
> 
> On Mon, Apr 19, 2010 at 11:03 PM, Gregory Crosswhite 
> <[email protected]> wrote:
> Hey everyone,
> 
> Has anyone done any work with bulk synchronous parallel computing in Haskell? 
> The idea behind the model is that you divide your computation into a series 
> of alternating computation and communication phases. It recently occurred to 
> me that this might be an ideal setup for parallelizing a pure language like 
> Haskell: one stage independently applies a bunch of functions to independent 
> chunks of data, and the next stage applies a big function to all of the 
> chunks that recombines them into new chunks for the next parallel phase. All 
> stages are conceptually pure, even if the second stage is eventually turned 
> into something involving communication, and hence side effects, under the 
> hood.
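(A minimal sketch of one such BSP "superstep" in plain Haskell; the names here are illustrative, not from any library, and in practice the compute phase would use something like parMap rdeepseq from the parallel package rather than plain map:)

```haskell
-- One BSP superstep: a pure compute phase over independent chunks,
-- followed by a pure recombination phase that redistributes the results
-- for the next step. With map replaced by a parallel map, both phases
-- stay conceptually pure.
superstep :: (a -> b) -> ([b] -> [a]) -> [a] -> [a]
superstep compute recombine = recombine . map compute

-- Toy instance: square each chunk's value, then rotate the results so
-- each chunk receives its neighbour's output in the next phase.
step :: [Int] -> [Int]
step = superstep (^ 2) (\ys -> tail ys ++ [head ys])
```

For example, step [1, 2, 3] squares the chunks to [1, 4, 9] and rotates to [4, 9, 1].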
> 
> Experiences?  Thoughts?
> 
> 
> You may want to check out NDP, e.g. here: 
> http://www.haskell.org/haskellwiki/GHC/Data_Parallel_Haskell
> 
> It's at a higher level of abstraction, in a way. You don't need to worry 
> about the dicing up and recombining; the compiler takes care of that for 
> you. You just write things in terms of parallel arrays (which can be small, 
> e.g. two elements wide) and the compiler will fuse/flatten these together 
> into big bulk parallel computations with communication between them. 
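(For reference, the classic dot-product example from the DPH documentation looks roughly like this; it needs the dph packages and the ParallelArrays extension, so it won't compile with a plain GHC install:)

```haskell
{-# LANGUAGE ParallelArrays #-}
import Data.Array.Parallel

-- The programmer writes ordinary-looking array code over the parallel
-- array type [:Double:]; the vectoriser decides how to chunk, fuse, and
-- distribute the work across cores.
dotp :: [:Double:] -> [:Double:] -> Double
dotp xs ys = sumP (zipWithP (*) xs ys)
```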
> 
> -- 
> Sebastian Sylvan

_______________________________________________
Haskell-Cafe mailing list
[email protected]
http://www.haskell.org/mailman/listinfo/haskell-cafe
