On Wed, Jun 21, 2017 at 5:39 AM, John Hearns <[email protected]> wrote:
> For a long time the 'sweet spot' for HPC has been the dual socket Xeons.

True, but why? I guess because there weren't many other options: in
the early days of multicore CPUs, dual sockets were the only way to
get decent local parallelism, even with QPI (and its ancestors) being
a bottleneck, and the only way to get enough PCIe lanes (40 lanes
ought to be enough for anyone, right?)

But now, with 20+ core CPUs, does it still really make sense to have
dual socket systems everywhere, with NUMA effects all over the place
that typical users are blissfully unaware of?
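
For anyone who wants to see those NUMA effects on their own
dual-socket box, a quick sketch (assuming util-linux's lscpu and the
numactl package are installed; ./my_app is a placeholder for your own
binary):

```shell
# Show sockets, NUMA nodes, and which CPUs belong to each node
lscpu | grep -Ei 'socket|numa'

# Show per-node memory sizes and the inter-node distance matrix
numactl --hardware

# Pin a run to node 0's cores and memory to avoid remote accesses
numactl --cpunodebind=0 --membind=0 ./my_app
```

Comparing a --membind'ed run against an unpinned one is usually the
quickest way to show users how much remote memory access costs them.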

Seems to me like this is a smart design move from AMD, and that
single-socket systems with 20+ core CPUs and 128 PCIe lanes could
make a very cool base for many HPC systems. Of course, that's just on
paper for now; proper benchmarking will be required.

Cheers,
-- 
Kilian
_______________________________________________
Beowulf mailing list, [email protected] sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf