On Nov 8, 2011, at 2:46 AM, Gilad Shainer wrote:
>> I just test things and go for the fastest. But if we do the theoretical
>> math, SHMEM is difficult to beat of course.
>> Google for measurements with shmem, not many out there.
>
> SHMEM within the node or between nodes?
shmem is the programming
The latency numbers are more or less the same between the IB vendors on SDR,
DDR and QDR. Mellanox is the only vendor with FDR IB for now, and with PCIe 3.0
latencies are below 1us (RDMA much below that...). The question is what you are
going to use the system for - which apps.
Gilad
> -Original Message-
> I just test things and go for the fastest. But if we do the theoretical math,
> SHMEM is difficult to beat of course.
> Google for measurements with shmem, not many out there.
SHMEM within the node or between nodes?
> The fact that so few standardized/rewrote their floating point software for
> GPUs is
On Nov 8, 2011, at 12:44 AM, Joseph Han wrote:
> To further complicate the issue, if latency is the key driving factor
> for older hardware, I think that the chips with the InfiniPath/
> PathScale lineage tend to have lower latencies than the Mellanox
> InfiniHost line.
>
> When in the DDR time frame
To further complicate the issue, if latency is the key driving factor for older
hardware, I think that the chips with the InfiniPath/PathScale lineage tend to
have lower latencies than the Mellanox InfiniHost line.
Back in the DDR time frame, I measured InfiniPath ping-pong latencies 3-4x
better
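A ping-pong latency test of the kind Joseph mentions is only a few lines of MPI.
The sketch below is my own, not anything from the thread (the OSU osu_latency
benchmark does the same job more carefully): it times many small-message round
trips between two ranks and reports half the average round-trip time as the
one-way latency.

/* Minimal MPI ping-pong latency sketch. Compile with mpicc, run with
 * exactly two ranks, e.g.: mpirun -np 2 -host nodeA,nodeB ./pingpong */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 10000, skip = 1000;  /* warm-up iterations are skipped */
    char buf[8] = {0};                     /* small message, latency-bound */
    int rank;
    double t0 = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int i = 0; i < iters + skip; i++) {
        if (i == skip) {                   /* start timing after warm-up */
            MPI_Barrier(MPI_COMM_WORLD);
            t0 = MPI_Wtime();
        }
        if (rank == 0) {
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    if (rank == 0)
        printf("one-way latency: %.2f us\n",
               (MPI_Wtime() - t0) / iters / 2.0 * 1e6);  /* half the round trip */
    MPI_Finalize();
    return 0;
}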
Yeah, well, I'm no expert on what PCI-X adds versus PCIe.
I'm on a budget here :)
I just test things and go for the fastest. But if we do the theoretical
math, SHMEM is difficult to beat of course.
Google for measurements with shmem; there are not many out there.
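The SHMEM in question is the one-sided put/get interface from Cray/SGI, carried
on today as OpenSHMEM. Below is a minimal sketch of my own, assuming an
OpenSHMEM implementation is available (for example the one bundled with Open
MPI; older SHMEM libraries spell the setup calls start_pes(0), _my_pe() and
_num_pes() instead). Each PE writes straight into its neighbour's memory with a
single put and no matching receive on the target, which is why the theoretical
latency is so hard to beat.

#include <stdio.h>
#include <shmem.h>

int main(void)
{
    /* 'static' makes these symmetric: every PE has them at the same
     * address, so remote PEs can target them directly. */
    static long src = 0, dst = -1;

    shmem_init();
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    src = me;
    shmem_barrier_all();

    /* One-sided put: write my value into 'dst' on the next PE.
     * No receive call is ever posted on the target side. */
    shmem_long_put(&dst, &src, 1, (me + 1) % npes);

    shmem_barrier_all();
    printf("PE %d of %d has dst = %ld\n", me, npes, dst);

    shmem_finalize();
    return 0;
}

Typically this is built and launched with the oshcc / oshrun wrappers, e.g.
oshrun -np 2 ./a.out.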
Fact that so few standardized/rewrote their
RDMA read is a round-trip operation and it is measured from host memory to host
memory. I doubt Quadrics had half of that for round-trip operations measured
from host memory to host memory. The PCI-X memory-to-card latency was around
0.7 us by itself (one way).
Gilad
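(Rough arithmetic behind that point: a host-memory-to-host-memory RDMA read
needs at least two I/O-bus crossings on the data path - the remote HCA reading
the data out of the remote host's memory and the local HCA writing it into the
local host's memory - plus whatever it takes to get the request out in the
first place. At roughly 0.7 us per PCI-X crossing, the buses alone account for
1.4 us or more of the round trip, before counting the wire, the switch, or the
HCAs themselves.)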
-Original Message-
From:
hi Greg,
Very useful info! I was already wondering about the different timings
I see for InfiniBand, but indeed it's the ConnectX that scores better in latency.
$289 on eBay, but then that's jumping straight to QDR:
"ConnectX-2 Dual-Port VPI QDR Infiniband Mezzanine I/O Card for Dell
PowerEdge M1000e-Se
> Date: Mon, 07 Nov 2011 13:16:00 -0500
> From: Prentice Bisbal
> Subject: Re: [Beowulf] building Infiniband 4x cluster questions
> Cc: Beowulf Mailing List
> Message-ID:<4eb82060.3050...@ias.edu>
> Content-Type: text/plain; charset=ISO-8859-1
>
> Vincent,
>
> Don't forget that between SDR and QDR, there is DDR.
It seems the latency of DDR InfiniBand for a blocking read from
remote memory (RDMA read) is between that of SDR and Quadrics, with
Quadrics being a lot faster.
http://www.google.nl/url?sa=t&rct=j&q=rdma%20latency%20ddr%20infiniband&source=web&cd=9&ved=0CF8QFjAI&url=http%3A%2F%2Fwww.cse.scitec
hi Eugen,
In game tree search it is, algorithmically, basically a century ahead of
many other sciences, as the brilliant minds have been busy with it. For the
brilliant guys it was possible to make cash with it.
In math there are still many challenges to design one kick-butt
algorithm, but you
Vincent,
Don't forget that between SDR and QDR, there is DDR. If SDR is too
slow, and QDR is too expensive, DDR might be just right.
--
Goldilocks
On 11/07/2011 11:58 AM, Vincent Diepeveen wrote:
> hi Prentice,
>
> I had noticed the difference between SDR and QDR,
> the SDR cards are affordable, the QDR isn't.
Thanks for the very clear explanation Gilad!
With just 2 lines you beat the entire wiki and lots of other homepages
full of endless chatter :)
On Nov 7, 2011, at 6:19 PM, Gilad Shainer wrote:
>> hi John,
>>
>> I had already read about the subnet manager but I don't really
>> understand it,
>> except when it's only a configuration tool.
> They use the term "message oriented" with the description that the IB
> hardware takes care of segmentation and so forth, so that the application
> just says "send this" or "receive this" and the gory details are
> concealed. Then he distinguishes that from a TCP/IP stack, etc., where
> the sof
> I had noticed the difference between SDR and QDR; the SDR cards are affordable,
> the QDR isn't.
>
> The SDRs are all $50-$75 on eBay now. For the QDRs I didn't find cheap prices
> in that price range yet.
You can also find cards on www.colfaxdirect.com, and you can check with the
HPC Advisory Council
> hi John,
>
> I had already read about the subnet manager but I don't really understand it,
> except when it's only a configuration tool.
>
> I assume it's not something that's critical in terms of bandwidth; it doesn't
> need nonstop bandwidth from the machine & switch, does it?
The subnet management
On Nov 7, 2011, at 5:07 PM, Robert Horton wrote:
> On Mon, 2011-11-07 at 15:45 +0100, Vincent Diepeveen wrote:
>> What's the second one doing, is this just in case the switch fails,
>> a kind of 'backup' port?
>>
>> In my naivety I had thought that both ports together formed the
>> bidirectional link to the switch.
On Nov 7, 2011, at 5:50 PM, Prentice Bisbal wrote:
DRIVERS:
Drivers for the cards now. Are those all open source, or do they require
payment? Is the source released for all those cards' drivers, and do
they integrate into Linux?
You should get everything you need from the Linux kernel and / or OFED.
hi Prentice,
I had noticed the difference between SDR and QDR,
the SDR cards are affordable, the QDR isn't.
The SDRs are all $50-$75 on eBay now. For the QDRs I didn't find cheap
prices in that price range yet.
If I wanted to build a network that's low latency and had a
budget of $800 or so a node
An interesting writeup..
A sort of tangential question about that writeup..
They use the term "message oriented" with the description that the IB
hardware takes care of segmentation and so forth, so that the application
just says "send this" or "receive this" and the gory details are
concealed.
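Roughly speaking, that is what the verbs interface exposes: the application
posts a send or receive work request describing a complete message (buffer,
length, memory registration) on a queue pair; the HCA segments it into
MTU-sized packets, handles acks and retransmission itself on a reliable
connection, and a completion queue entry reports when the whole message is
done. With TCP the kernel instead chops a byte stream into segments, and the
software on both ends has to re-create any message boundaries itself.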
>>> DRIVERS:
>>> Drivers for the cards now. Are those all open source, or do they require
>>> payment? Is the source released for all those cards' drivers, and do
>>> they integrate into Linux?
>>> You should get everything you need from the Linux kernel and / or OFED.
>
> You can also find the drivers
On 11/06/2011 06:01 PM, Vincent Diepeveen wrote:
> hi,
>
> There is a lot of infiniband 4x stuff on ebay now.
Vincent,
Do you mean 4x, or QDR? They refer to different parts of the IB
architecture: 4x refers to the number of lanes for the data to travel
down, and QDR refers to the data signalling rate per lane.
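To put the standard numbers on it, the lanes signal at 2.5, 5 and 10 Gb/s for
SDR, DDR and QDR respectively, so a 4x link works out to:

  4x SDR:  4 x 2.5 Gb/s = 10 Gb/s signalling  (~8 Gb/s of data after 8b/10b encoding)
  4x DDR:  4 x 5 Gb/s   = 20 Gb/s signalling  (~16 Gb/s of data)
  4x QDR:  4 x 10 Gb/s  = 40 Gb/s signalling  (~32 Gb/s of data)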
> > Do I need to connect both to the same switch?
> > So in short with infiniband you lose 2 ports of the switch to 1 card,
> > is that correct?
>
> You probably want to just connect one port to a switch and leave the other
> one unconnected to start with.
Correct. You only need to connect one port.
On Mon, 2011-11-07 at 15:45 +0100, Vincent Diepeveen wrote:
> What's the second one doing, is this just in case the switch fails,
> a kind of 'backup' port?
>
> In my naivety I had thought that both ports together formed the
> bidirectional link to the switch.
> So I thought that 1 port was
> Anyway, on an Infiniband network the Subnet Manager assigns new hosts a
> LID (local identifier)
> and keeps track of routing tables between them.
> No SM, no new hosts join the network.
Regardless, make sure you're running opensm on at least one of the
nodes connected to your IB switch. I di
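(On a typical OFED install that just means starting opensm as root on one node,
by hand or via the opensmd init script. You can tell it is working because
ibstat on the other nodes shows the port state go from Initializing to Active,
and ibhosts then lists every HCA on the fabric.)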
>
> hi John,
>
> I had already read about the subnet manager but I don't really understand
> it, except when it's only a configuration tool.
>
> I assume it's not something that's critical in terms of bandwidth; it
> doesn't need nonstop bandwidth from the machine & switch, does it?
>
It is critical.
Hi all,
a relatively easy to read introduction to IB is found at
http://members.infinibandta.org/kwspub/Intro_to_IB_for_End_Users.pdf
Cheerio,
Jan
On Nov 7, 2011, at 12:44 PM, Robert Horton wrote:
> Hi,
>
> Most of what I know about Infiniband came from the notes at
> http://www.hpcadvisorycouncil.com/events/switzerland_workshop/
> agenda.php
> (or John Hearns in his previous life!).
>
> On Mon, 2011-11-07 at 00:01 +0100, Vincent Diepeveen wrote:
hi John,
I had already read about the subnet manager but I don't really understand
it, except when it's only a configuration tool.
I assume it's not something that's critical in terms of bandwidth; it
doesn't need nonstop bandwidth from the machine & switch, does it?
In case of a simple cluster con
Hi,
Most of what I know about Infiniband came from the notes at
http://www.hpcadvisorycouncil.com/events/switzerland_workshop/agenda.php
(or John Hearns in his previous life!).
On Mon, 2011-11-07 at 00:01 +0100, Vincent Diepeveen wrote:
> Do I need to connect both to the same switch?
> So in short with infiniband you lose 2 ports of the switch to 1 card, is that correct?
On Mon, Nov 07, 2011 at 11:10:50AM +, John Hearns wrote:
> Vincent,
> I cannot answer all of your questions.
> I have a couple of answers:
>
> Regarding MPI, you will be looking for OpenMPI
>
> You will need a subnet manager running somewhere on the fabric.
> These can either run on the switch or on a host.
Vincent,
I cannot answer all of your questions.
I have a couple of answers:
Regarding MPI, you will be looking for OpenMPI
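With Open MPI of that era you would build it against the OFED verbs libraries
and then launch jobs with something like "mpirun --mca btl openib,self,sm -np N
./app" so that the openib BTL (the InfiniBand transport) is used instead of
plain TCP; I am quoting the usual incantation from memory, so check
ompi_info --param btl all for what your build actually supports.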
You will need a subnet manager running somewhere on the fabric.
These can either run on the switch or on a host.
If you are buying this equipment from eBay I would imagine yo