On 5/26/2011 4:23 PM, Mark Hahn wrote:
>> Agreed. Just finished telling another vendor, "It's not high speed
>> storage unless it has an IB/RDMA interface". They love that. Except
>
> what does RDMA have to do with anything? why would straight 10G ethernet
> not qualify? I suspect you're really saying that you want an efficient
> interface

On 05/26/2011 03:29 PM, Greg Keller wrote:
> Agreed. Just finished telling another vendor, "It's not high speed
> storage unless it has an IB/RDMA interface". They love that. Except
> for some really edge cases, I can't imagine running IO over GbE for
> anything more than tr
Heh ... love it!

Date: Thu, 26 May 2011 12:18:18 -0400 (EDT)
From: Mark Hahn
Subject: Re: [Beowulf] Infiniband: MPI and I/O?
To: Bill Wichser
Cc: Beowulf Mailing List

> Wondering if anyone out there is doing both I/O to storage as well as
> MPI over the same IB fabric.
I would say that is the norm. we certainly connect local storage
(Lustre) to nodes via the same fabric as MPI. gigabit is completely
inadequate for modern nodes, so the only alternatives would
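
A minimal sketch of what "Lustre over the same IB fabric as MPI" looks
like on a client node, for anyone following along (the interface name,
MGS NID, filesystem name and mount point are placeholders, not taken
from this thread):

  # /etc/modprobe.d/lustre.conf -- point LNET at the o2ib LND on ib0
  options lnet networks="o2ib0(ib0)"

  # mount the filesystem over the IB fabric (MGS NID and fsname assumed)
  mount -t lustre 192.168.1.1@o2ib0:/lustre /mnt/lustre
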
The original post, from Bill Wichser:

Wondering if anyone out there is doing both I/O to storage as well as
MPI over the same IB fabric. Following along in the Mellanox User's
Guide, I see a section on how to implement the QOS for both MPI and my
lustre storage. I am curious though as to what might happen to the
performance of th
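
Since the question points at the QoS section of the Mellanox User's
Guide: a rough sketch of the kind of OpenSM setup that section covers,
putting MPI and Lustre on separate service levels. The SL numbers, VL
weights and the MPI launch parameter are illustrative assumptions to be
checked against the guide and your MPI's documentation, not a tested
recipe:

  # /etc/opensm/opensm.conf (illustrative values only)
  qos TRUE
  qos_max_vls 8
  qos_high_limit 255
  # SL-to-VL map: SL0 (default, e.g. Lustre/o2ib) -> VL0, SL1 (MPI) -> VL1
  qos_sl2vl 0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0
  # VL arbitration: give the MPI VL more weight in the high-priority table
  qos_vlarb_high 1:192,0:64
  qos_vlarb_low 0:64,1:64

  # then launch MPI jobs on SL 1, e.g. with Open MPI's openib BTL
  # (./your_app is a placeholder binary)
  mpirun --mca btl_openib_ib_service_level 1 ./your_app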