Nothing was broken in the previous InfiniBand adapters. The previous
generation, despite its higher MPI latency, still beat other solutions that
showed lower latency, thanks to the fully offloaded architecture.

look, that's just not true. I've got a cheap, low-end cluster that has run plain old myri2g and mx for ~3 years, and it does 3 us latency.
IB was more like 6-8 us (at the mpi level, of course) three years ago,
and with the exception of the new adapters you're promising, it's still not faster than 2g-mx...
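for reference, the "mpi level" numbers here are the usual 0- or 1-byte ping-pong between two ranks on two different nodes: half the averaged round-trip time. a minimal sketch of that kind of test, assuming the standard MPI C bindings and not claiming to be the exact benchmark behind any of these figures:

/* minimal two-rank ping-pong latency sketch (illustrative, not the
 * benchmark used for the numbers above).  one-way latency is taken
 * as half the averaged round-trip time of a 1-byte message. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, i, iters = 10000;
    char buf[1] = {0};
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* warm-up so connection setup isn't included in the timing */
    for (i = 0; i < 100; i++) {
        if (rank == 0) {
            MPI_Send(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("one-way latency: %.2f us\n",
               (t1 - t0) / iters / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}

run it as one rank per node (e.g. mpirun -np 2 with a hostfile naming the two boxes) so the traffic actually crosses the wire rather than shared memory.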

We gain experience from each generation and apply it to the next, and this
is the outcome.

thanks, I needed my daily dose of marketing-speak. unfortunately,
the question remains unanswered. gradual improvement does not explain a 3x jump.

There are no special requirements for achieving this MPI latency, and
we are very happy to provide low latency without changing the offload
concept of our architecture.

OK, so what's unique about your offload concept?  it's obviously not the case
that you're the first to do offload.

also, just to be perfectly explicit, this is 1.2 us
inter-node, right?  not something crazy like two 8-core boxes
with only two of 16 hops inter-box?

2 nodes connected via InfiniBand - a regular setting. There is no dependency
on the number of cores, as we don't need the CPU to drive the interconnect.

sniping at infinipath aside, thanks for the confirmation.