Re: [Beowulf] glusterfs and openmpi/mpich problems

2009-01-12 Thread Marian Marinov
On Thursday 08 January 2009 20:38:40 Joe Landman wrote:
> Gerry Creager wrote:
> > We've been working with gluster of late, on our high throughput cluster
> > (126 nodes, gigabit connected). We did some tweaking recently, and now,
> > my test code, an instance of WRF on 128 cores, just sorta dies.

Re: [Beowulf] glusterfs and openmpi/mpich problems

2009-01-08 Thread Joe Landman
Gerry Creager wrote: We've been working with gluster of late, on our high throughput cluster (126 nodes, gigabit connected). We did some tweaking recently, and now, my test code, an instance of WRF on 128 cores, just sorta dies. More specifically, it takes 19 minutes to write the first 403MB …
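When a parallel job stalls on output like this, a common first step is to take MPI and the application out of the picture and time a raw streaming write to the same filesystem. A minimal sketch (the `TARGET` variable and file name are placeholders; on a real node you would point `TARGET` at the glusterfs mount rather than `/tmp`):

```shell
#!/bin/sh
# Isolation test: time a plain sequential write outside of WRF/MPI.
# TARGET is an assumption -- set it to the glusterfs mount point in real use;
# it defaults to /tmp here so the sketch runs anywhere.
TARGET=${TARGET:-/tmp}

# Write 64 MiB and force it to stable storage before dd reports timing,
# so the number reflects the filesystem, not the page cache.
dd if=/dev/zero of="$TARGET/ddtest.$$" bs=1M count=64 conv=fsync

# Clean up the scratch file.
rm -f "$TARGET/ddtest.$$"
```

Comparing the rate dd reports on the gluster mount against a local disk quickly shows whether the slowdown lives in the filesystem layer or in the application's I/O pattern.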

[Beowulf] glusterfs and openmpi/mpich problems

2009-01-08 Thread Gerry Creager
We've been working with gluster of late, on our high throughput cluster (126 nodes, gigabit connected). We did some tweaking recently, and now, my test code, an instance of WRF on 128 cores, just sorta dies. More specifically, it takes 19 minutes to write the first 403MB file to disk, while v…
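For scale, the figures in the report work out to well under 1 MB/s sustained, a tiny fraction of what even one gigabit link can carry. A quick back-of-the-envelope check (the 403 MB and 19 minutes come from the report above; the ~90% usable-payload factor for gigabit Ethernet is an assumption):

```python
# Figures from the report: 403 MB written in 19 minutes.
size_mb = 403
elapsed_s = 19 * 60

throughput = size_mb / elapsed_s  # effective write rate, MB/s

# Rough usable payload of a single gigabit Ethernet link:
# 1000 Mbit/s / 8 bits per byte, times an assumed ~90% efficiency.
gige_mb_s = 1000 / 8 * 0.9

print(f"effective write rate: {throughput:.2f} MB/s")
print(f"fraction of one GigE link: {throughput / gige_mb_s:.2%}")
```

At roughly a third of a megabyte per second, the bottleneck is clearly not the wire, which points at the filesystem or its configuration.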