On 8/28/2012 12:43 PM, shawn wilson wrote:
> On Tue, Aug 28, 2012 at 4:04 AM, Stan Hoeppner <s...@hardwarefreak.com> wrote:
> 
>> Its memory bandwidth of 20GB/s was many times higher than any x86
>> server at that time as they all used a single P6 bus, with only 1GB/s
>> bandwidth.  20GB/s is peanuts today given just two channels of DDR3-1333
>> have just over 20GB/s, but back then, in the late 1990s, this was huge.
>>  This CMP design also allowed assigning different amounts of memory to
>> each of the hosts, with the firmware and custom crossbar chipset setting
>> up the fences in the physical memory map.  Individual PCI buses and IO
>> devices could be assigned to any of the partitioned servers.  In the
>> first models, a console module had to be installed which included VGA,
>> KB, mouse ports and each was controlled via a KVM switch.  Later models
>> had a much more intelligent solution in the form of a single system
>> controller.
>>
>> This also facilitated the ability to cluster multiple sets of two
>> physical hosts (up to 4 clusters per server) within a single server
>> using system memory as the cluster interconnect, with latency thousands
>> of times lower and bandwidth thousands of times higher than the fastest
>> network interconnects at that time.  This became immensely popular with
>> Unisys customers, many running multiple clustered MS SQL servers within
>> one ES7000 mainframe.
>>
> 
> wasn't the 20GB/s infiniband introduced in the late 90s / early 2k?

The first Infiniband hardware hit the market in the early 2000s.  But its
data rate was only 2Gbps, or ~200MB/s, one way.  This is Infiniband 1x.
That's roughly 100 times less bandwidth than the memory system in the
32-way ES7000, and the latency is over 100 times higher.  Now take into
account that with memory-based networking, all that is required to send
a packet is a write to a memory location, and all that is required to
receive it is a read of that memory location.
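Purely as an illustration of that point (my own sketch, nothing to do
with the actual ES7000 firmware or its mailbox layout, which I'm not
claiming to describe), here is the idea in C, with two threads standing
in for two partitioned hosts sharing a region of memory:

/* hypothetical shared-memory "NIC": send = store, receive = load */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <string.h>

struct mailbox {
    atomic_int ready;        /* 0 = empty, 1 = message present        */
    char       payload[64];  /* the "packet" lives directly in memory */
};

static struct mailbox shared_box;  /* stands in for the fenced shared region */

static void *sender(void *arg)
{
    (void)arg;
    /* "transmit" = write the payload, then publish it with one store */
    strcpy(shared_box.payload, "hello from partition A");
    atomic_store_explicit(&shared_box.ready, 1, memory_order_release);
    return NULL;
}

static void *receiver(void *arg)
{
    (void)arg;
    /* "receive" = poll the flag, then read the payload back out of memory */
    while (atomic_load_explicit(&shared_box.ready, memory_order_acquire) == 0)
        ;   /* spin: no host adapter, no DMA, no interrupt */
    printf("partition B received: %s\n", shared_box.payload);
    return NULL;
}

int main(void)
{
    pthread_t s, r;
    pthread_create(&r, NULL, receiver, NULL);
    pthread_create(&s, NULL, sender, NULL);
    pthread_join(s, NULL);
    pthread_join(r, NULL);
    return 0;
}

No serialization onto a wire, no switch hops, no interrupt handling --
which is where the latency advantage comes from.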

> that should about measure up to what you're describing, but with tons
> more scalability, no?

Absolutely not.  4x QDR Infiniband is the most popular type in wide use
today.  Its one-way bandwidth is "only" 4GB/s, and its one-way latency
is still many tens of times greater than that of the in-memory networking
in the ES7000 we're discussing.  Infiniband 12x QDR is currently used for
switch-to-switch backbone links and node links in some HPC clusters, and
has a one-way data rate of 96Gbps, or 12GB/s, only 60% of the ES7000
memory bandwidth, while its latency is still some 10 times greater.

The absolute "fastest" type of Infiniband is currently 12x EDR, with a
one way data rate of 300Gbps, or 37.5GB/s.  And just as some of the
Internet's fastest backbone links at 10 Terabits/sec have far more
bandwidth that the memory subsystem of a high end parallel server of
over a decade ago, so does today's fastest implementation of Infiniband.
 This shouldn't be surprising.  Just as it shouldn't be surprising than
any AMD socket AM3 or better system has more memory bandwidth than the
decade+ old high end server we're discussing.  Technology doesn't sit still.
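If you want to check the conversions yourself, here's a quick sketch in
C; the Gbps figures are the usable one-way data rates quoted above, and
the 20GB/s is the ES7000 memory bandwidth from earlier in the thread:

#include <stdio.h>

int main(void)
{
    const double mem_GBs = 20.0;                  /* ES7000 memory bandwidth */
    const double gbps[]  = { 32.0, 96.0, 300.0 }; /* one-way data rates      */
    const char  *name[]  = { "4x QDR", "12x QDR", "12x EDR" };

    for (int i = 0; i < 3; i++) {
        double GBs = gbps[i] / 8.0;               /* Gbit/s -> GByte/s */
        printf("%-8s %5.0f Gbps = %5.1f GB/s = %6.1f%% of ES7000 memory b/w\n",
               name[i], gbps[i], GBs, 100.0 * GBs / mem_GBs);
    }
    return 0;
}

That prints 4.0GB/s (20%), 12GB/s (60%), and 37.5GB/s (187.5%)
respectively.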

-- 
Stan

