Re: [Beowulf] Infiniband: MPI and I/O?

2011-05-26 Thread Greg Keller
On 5/26/2011 4:23 PM, Mark Hahn wrote:
>> Agreed. Just finished telling another vendor, "It's not high speed
>> storage unless it has an IB/RDMA interface". They love that. Except
>
> what does RDMA have to do with anything? why would straight 10G ethernet
> not qualify? I suspect you're really saying that you want an efficient
> interface…
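
The "efficient interface" distinction here is measurable: RDMA bypasses the kernel, TCP does not. A minimal back-to-back comparison, assuming the OFED perftest suite and iperf are installed (the hostnames are placeholders):

    # RDMA path (InfiniBand verbs, kernel bypass)
    server$ ib_write_bw
    client$ ib_write_bw ib-server

    # TCP path for comparison
    server$ iperf -s
    client$ iperf -c eth-server

On the same hardware the verbs test typically runs near line rate with little CPU involvement, while the TCP run spends noticeably more CPU per byte in the stack; that gap, rather than raw wire speed, is the usual argument for an RDMA-capable storage path.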

Re: [Beowulf] Infiniband: MPI and I/O?

2011-05-26 Thread Mark Hahn
> Agreed. Just finished telling another vendor, "It's not high speed
> storage unless it has an IB/RDMA interface". They love that. Except

what does RDMA have to do with anything? why would straight 10G ethernet
not qualify? I suspect you're really saying that you want an efficient interface…

Re: [Beowulf] Infiniband: MPI and I/O?

2011-05-26 Thread Mark Hahn
>>> Wondering if anyone out there is doing both I/O to storage as well as
>>> MPI over the same IB fabric.
>>
>> I would say that is the norm. we certainly connect local storage (Lustre)
>> to nodes via the same fabric as MPI. gigabit is completely
>> inadequate for modern nodes, so the only alternatives would…
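
For anyone wanting to replicate the Lustre-and-MPI-on-one-fabric setup: the Lustre side is an LNet configuration detail. A minimal client-side sketch, assuming the o2iblnd driver is available and using a hypothetical MGS address and filesystem name:

    # /etc/modprobe.d/lustre.conf -- route LNet over the IB HCA (o2iblnd)
    options lnet networks=o2ib0(ib0)

    # mount the filesystem across the same fabric MPI uses
    mount -t lustre 10.10.0.1@o2ib0:/scratch /mnt/scratch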

Re: [Beowulf] Infiniband: MPI and I/O?

2011-05-26 Thread Joe Landman
On 05/26/2011 03:29 PM, Greg Keller wrote:
> Agreed. Just finished telling another vendor, "It's not high speed
> storage unless it has an IB/RDMA interface". They love that. Except

Heh ... love it!

> for some really edge cases, I can't imagine running IO over GbE for
> anything more than…

Re: [Beowulf] Infiniband: MPI and I/O?

2011-05-26 Thread Greg Keller
> Date: Thu, 26 May 2011 12:18:18 -0400 (EDT)
> From: Mark Hahn
> Subject: Re: [Beowulf] Infiniband: MPI and I/O?
> To: Bill Wichser
> Cc: Beowulf Mailing List
> Message-ID:
> Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
>
>> Wondering if anyone out there is doing both I/O to storage as well as…

Re: [Beowulf] Infiniband: MPI and I/O?

2011-05-26 Thread Bill Wichser
Mark Hahn wrote:
>> Wondering if anyone out there is doing both I/O to storage as well as
>> MPI over the same IB fabric.
>
> I would say that is the norm. we certainly connect local storage
> (Lustre) to nodes via the same fabric as MPI. gigabit is completely
> inadequate for modern nodes…

Re: [Beowulf] Infiniband: MPI and I/O?

2011-05-26 Thread Gilad Shainer
> Wondering if anyone out there is doing both I/O to storage as well as
> MPI over the same IB fabric. Following along in the Mellanox User's
> Guide, I see a section on how to implement the QOS for both MPI and my
> lustre storage. I am curious though as to what might happen to the
> performance of the…
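
For the MPI half of the QoS question: once the subnet manager assigns distinct service levels, each MPI stack has to be told which SL to request. A hedged sketch for the stacks current at the time of this thread (SL values, process counts, and application names are placeholders):

    # Open MPI (openib BTL): request service level 0 for MPI traffic
    mpirun --mca btl_openib_ib_service_level 0 -np 64 ./app

    # MVAPICH2: the equivalent knob is an environment variable
    MV2_DEFAULT_SERVICE_LEVEL=0 mpirun_rsh -np 64 -hostfile hosts ./app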

Re: [Beowulf] Infiniband: MPI and I/O?

2011-05-26 Thread Mark Hahn
> Wondering if anyone out there is doing both I/O to storage as well as
> MPI over the same IB fabric.

I would say that is the norm. we certainly connect local storage
(Lustre) to nodes via the same fabric as MPI. gigabit is completely
inadequate for modern nodes, so the only alternatives would…

[Beowulf] Infiniband: MPI and I/O?

2011-05-26 Thread Bill Wichser
Wondering if anyone out there is doing both I/O to storage as well as MPI over the same IB fabric. Following along in the Mellanox User's Guide, I see a section on how to implement the QOS for both MPI and my lustre storage. I am curious though as to what might happen to the performance of the…
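
The Mellanox User's Guide section referenced above boils down to enabling QoS in OpenSM and steering traffic classes onto separate service levels. A minimal sketch, assuming OpenSM is the subnet manager (the port GUID is hypothetical, and file paths follow common OFED defaults; check them against your install):

    # /etc/opensm/opensm.conf (fragment) -- turn QoS on
    qos TRUE

    # /etc/opensm/qos-policy.conf -- simplified policy syntax:
    # MPI rides the default SL 0; traffic to the Lustre server's
    # port GUID (placeholder value) is steered to SL 1
    qos-ulps
        default                                  : 0
        any, target-port-guid 0x0002c903000abcde : 1
    end

With the SLs mapped onto separate virtual lanes by the SL2VL tables, the VL arbitration weights then decide how the link is shared when MPI and Lustre traffic collide, which is exactly the performance question being asked here.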