On Tue, Aug 28, 2012 at 4:04 AM, Stan Hoeppner <s...@hardwarefreak.com> wrote:

> Its memory bandwidth of 20GB/s was many times higher than that of any
> x86 server at the time, as they all used a single P6 bus with only
> about 1GB/s of bandwidth.  20GB/s is peanuts today, given that just
> two channels of DDR3-1333 deliver a bit over 20GB/s, but back in the
> late 1990s this was huge.
>  This CMP design also allowed assigning different amounts of memory to
> each of the hosts, with the firmware and custom crossbar chipset setting
> up the fences in the physical memory map.  Individual PCI buses and IO
> devices could be assigned to any of the partitioned servers.  In the
> first models, each partition required its own console module with VGA,
> keyboard, and mouse ports, all controlled via a KVM switch.  Later models
> had a much more intelligent solution in the form of a single system
> controller.
>
> This also made it possible to cluster pairs of physical hosts (up to
> four clusters per server) within a single chassis, using system memory
> as the cluster interconnect, with latency thousands of times lower and
> bandwidth thousands of times higher than the fastest network
> interconnects of the day.  This became immensely popular with Unisys
> customers, many of whom ran multiple clustered MS SQL servers within
> one ES7000 mainframe.
>
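
A quick sanity check on the DDR3 figure quoted above (a
back-of-the-envelope sketch in Python; it assumes the standard 64-bit
channel width, and these are peak theoretical numbers, so sustained
throughput is lower):

    # Peak theoretical bandwidth of dual-channel DDR3-1333.
    # Assumes 64-bit (8-byte) wide channels.
    transfers_per_sec = 1333e6   # memory transfers/s per channel
    bytes_per_transfer = 8       # 64-bit channel width
    channels = 2
    gb_per_sec = transfers_per_sec * bytes_per_transfer * channels / 1e9
    print(round(gb_per_sec, 1))  # 21.3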

Wasn't 20GB/s InfiniBand introduced in the late '90s / early 2000s?
That should roughly measure up to what you're describing, but with tons
more scalability, no?
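
For concreteness, the "system memory as the interconnect" idea from the
quote can be sketched in miniature: two processes on one host exchange
data through a shared memory segment instead of a network link.  This
is only an illustration (plain Python, single OS image); the ES7000 did
it across hardware-partitioned hosts over a custom crossbar, which
nothing here reproduces.

    # Toy sketch: pass a message between two processes via shared
    # memory instead of a socket.  Requires Python 3.8+.
    from multiprocessing import Process, shared_memory

    def writer(name):
        # Attach to the existing segment by name and write into it.
        shm = shared_memory.SharedMemory(name=name)
        shm.buf[:5] = b"hello"
        shm.close()

    if __name__ == "__main__":
        seg = shared_memory.SharedMemory(create=True, size=5)
        p = Process(target=writer, args=(seg.name,))
        p.start()
        p.join()
        print(bytes(seg.buf[:5]))  # b'hello'
        seg.close()
        seg.unlink()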

