RDMA read is a round-trip operation, and it is measured from host memory to host memory. I doubt Quadrics achieved half of that for round-trip operations measured from host memory to host memory; the PCI-X memory-to-card hop was around 0.7 microseconds by itself (one way).
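For reference, the one-way ping-pong numbers discussed below are typically produced by a loop like the minimal MPI sketch here. This is only an illustrative sketch, not the verbs-level RDMA read measurement described above: it times MPI send/receive pairs and reports half of the averaged round trip as the "one-way" figure. (The RDMA read round trip itself is what a tool such as ib_read_lat from the perftest package reports.)

    /* Minimal MPI ping-pong latency sketch (illustrative only).
     * Rank 0 sends an 8-byte message to rank 1 and waits for the echo;
     * the reported "one-way" latency is half of the averaged round trip.
     * Compile: mpicc pingpong.c -o pingpong   Run: mpirun -np 2 ./pingpong
     */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, i, iters = 10000;
        char buf[8] = {0};
        double t0, t1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* warm-up exchanges so connection setup is not timed */
        for (i = 0; i < 100; i++) {
            if (rank == 0) {
                MPI_Send(buf, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, 8, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, 8, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();

        /* half the averaged round trip = one-way latency, in microseconds */
        if (rank == 0)
            printf("one-way latency: %.2f us\n", (t1 - t0) / iters / 2.0 * 1e6);

        MPI_Finalize();
        return 0;
    }

Run across two nodes over the fabric, this produces the kind of host-memory-to-host-memory one-way number being compared in this thread.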
Gilad

-----Original Message-----
From: beowulf-boun...@beowulf.org [mailto:beowulf-boun...@beowulf.org] On Behalf Of Vincent Diepeveen
Sent: Monday, November 07, 2011 12:33 PM
To: Greg Keller
Cc: beowulf@beowulf.org
Subject: Re: [Beowulf] building Infiniband 4x cluster questions

Hi Greg,

Very useful info! I was already wondering about the different timings I see for InfiniBand, but indeed it's the ConnectX that scores better in latency. $289 on eBay, but that's directly QDR then: "ConnectX-2 Dual-Port VPI QDR Infiniband Mezzanine I/O Card for Dell PowerEdge M1000e-Series Blade Servers"

The 1.91 microseconds for an RDMA read is for a ConnectX. Not bad for InfiniBand; only about 50% slower in latency than Quadrics, which is PCI-X of course. Now all that's needed is a cheap price for them :) It seems all the 'cheap' offers are indeed the InfiniHost III DDR versions.

Regards,
Vincent

On Nov 7, 2011, at 9:21 PM, Greg Keller wrote:
>
>> Date: Mon, 07 Nov 2011 13:16:00 -0500
>> From: Prentice Bisbal <prent...@ias.edu>
>> Subject: Re: [Beowulf] building Infiniband 4x cluster questions
>> Cc: Beowulf Mailing List <beowulf@beowulf.org>
>> Message-ID: <4eb82060.3050...@ias.edu>
>> Content-Type: text/plain; charset=ISO-8859-1
>>
>> Vincent,
>>
>> Don't forget that between SDR and QDR, there is DDR. If SDR is too
>> slow, and QDR is too expensive, DDR might be just right.
>
> And for DDR a key thing is, when latency matters, "ConnectX" DDR is
> much better than the earlier "InfiniHost III" DDR cards. We have
> hundreds of each, and the ConnectX cards make a large impact for some
> codes. Although nearly antique now, we actually have plans for the
> ConnectX cards in yet another round of updated systems. This is the
> third generation of system I have been able to re-use the cards in
> (Harpertown, Nehalem, and now single-socket Sandy Bridge), which
> makes me very happy. A great investment that will likely live until
> PCIe Gen3 slots are the norm.
> --
> Da Bears?!
>
>> --
>> Goldilocks
>>
>> On 11/07/2011 11:58 AM, Vincent Diepeveen wrote:
>>>> Hi Prentice,
>>>>
>>>> I had noticed the difference between SDR and QDR; the SDR cards
>>>> are affordable, the QDR cards aren't. The SDRs are all $50-$75 on
>>>> eBay now; I haven't found QDRs in that price range yet.
>>>>
>>>> If I wanted to build a low-latency network and had a budget of
>>>> $800 or so per node, of course I would build a Dolphin SCI
>>>> network, as that's probably the lowest-latency card sold, at $675
>>>> or so a piece.
>>>>
>>>> I do not really see a rival to Dolphin latency-wise. I bet most
>>>> manufacturers selling clusters don't use it because they can make
>>>> $100 or so more profit selling other networking gear, and
>>>> universities usually swallow that.
>>>>
>>>> So price totally dominates the network choice. As it seems now,
>>>> InfiniBand 4x is not going to offer enough performance. The
>>>> one-way ping-pong latencies over a switch that I see for it are
>>>> not very convincing: remote writes to RAM are nearly 10
>>>> microseconds for 4x InfiniBand, and that card is the only
>>>> affordable one.
>>>>
>>>> The old QM400s I have here do a one-way ping-pong in 2.1 us or so,
>>>> and QM500-Bs are plentiful on the net (big disadvantage, of
>>>> course: they need PCI-X); those do about 1.3 us and have SHMEM.
>>>> I'm not seeing a cheap switch for the QM500s though, nor cables.
>>>>
>>>> You see, price really dominates everything here.
>>>> Small, cheap nodes you cannot build if the port price, thanks to
>>>> an expensive network card, more than doubles.
>>>>
>>>> Power is not the real concern for now; if a factory already burns
>>>> a couple of hundred megawatts, a small cluster somewhere in the
>>>> attic eating a few kilowatts is not really a problem :)

_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf