> Date: Thu, 26 May 2011 12:18:18 -0400 (EDT)
> From: Mark Hahn <h...@mcmaster.ca>
> Subject: Re: [Beowulf] Infiniband: MPI and I/O?
> To: Bill Wichser <b...@princeton.edu>
> Cc: Beowulf Mailing List <beowulf@beowulf.org>
> Message-ID:
>       <pine.lnx.4.64.1105261210510.7...@coffee.psychology.mcmaster.ca>
> Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
>
>> Wondering if anyone out there is doing both I/O to storage as well as
>> MPI over the same IB fabric.
> I would say that is the norm.  We certainly connect local storage
> (Lustre) to nodes via the same fabric as MPI.  Gigabit is completely
> inadequate for modern nodes, so the only alternatives would be 10G
> or a secondary IB fabric, both quite expensive propositions, no?
>
> I suppose if your cluster does nothing but IO-light serial/EP jobs,
> you might think differently.
>
Agreed.  I just finished telling another vendor, "It's not high-speed 
storage unless it has an IB/RDMA interface."  They love that.  Except 
for some real edge cases, I can't imagine running IO over GbE for 
anything more than trivial IO loads.


I am curious whether anyone is doing IO over IB to SRP targets or some 
similar block-device approach.  Integration into the filesystem by 
Lustre/GPFS and others may be the best way to go, but we are not 100% 
convinced yet.  Any stories to share?
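
For context, here is the sort of raw sanity check we have in mind for 
an SRP-attached LUN: a minimal C sketch that streams O_DIRECT reads 
and reports throughput.  The device path /dev/sdb is a placeholder for 
whatever your SRP initiator actually exposes, and error handling is 
kept to a minimum.  Compile with "gcc -O2" (older glibc may also need 
-lrt for clock_gettime).

#define _GNU_SOURCE            /* for O_DIRECT on Linux */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

#define BLOCK (1 << 20)        /* 1 MiB per read */
#define COUNT 1024             /* 1 GiB total */

int main(void)
{
    void *buf;
    long long total = 0;
    struct timespec t0, t1;
    double secs;
    ssize_t n;
    int i, fd;

    /* /dev/sdb is a placeholder: use whatever LUN your SRP
       initiator surfaces on this node. */
    fd = open("/dev/sdb", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* O_DIRECT requires sector-aligned buffers; 4096 is safe. */
    if (posix_memalign(&buf, 4096, BLOCK)) return 1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < COUNT; i++) {
        n = read(fd, buf, BLOCK);
        if (n <= 0) break;
        total += n;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("read %lld MiB in %.2f s -> %.1f MiB/s\n",
           total >> 20, secs, (double)(total >> 20) / secs);

    free(buf);
    close(fd);
    return 0;
}

Numbers like these over the fabric, compared against the same test on 
local disk, are what will make or break the block-device argument for 
us versus letting Lustre/GPFS do the integration at the filesystem 
layer.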

Cheers!
Greg
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
