I will slightly blow my own trumpet here. I think a design with high-bandwidth uplinks and half-speed links to the compute nodes is a good idea. I would love some pointers to studies on bandwidth utilisation in large-scale codes. Are there really any codes that will use 200Gbps across many nodes simultaneously?
On Sun, 21 Oct 2018 at 18:57, John Hearns <hear...@googlemail.com> wrote:
> A comment from Brock Palen please?
>
> https://www.nextplatform.com/2018/10/18/great-lakes-super-to-remove-islands-of-compute/
>
> I did a bid for a new HPC cluster at UCL in the UK, using FDR adapters and
> 100Gbps switches, making the same arguments about cutting down on switch
> counts but still having a non-blocking network (at the time Mellanox were
> promoting FDR by selling it at 40Gbps prices).
>
> But in this article, if you have 1x switch in a rack and use all 80 ports
> (with splitters), there are not many ports left for uplinks!
> I imagine this is 2x 200Gbps switches, with 20 ports of each switch
> equipped with port splitters and the other 20 ports as uplinks.
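For what it's worth, the port arithmetic in the quoted scenario can be checked quickly. The sketch below is only illustrative: it assumes a 40-port 200Gbps switch and 2:1 port splitters (not confirmed by the article), and just computes the resulting blocking factor.

```python
# Minimal sketch of the rack-level port/bandwidth arithmetic discussed above.
# Assumptions (not from the article): 40-port 200 Gbps switches, 2:1 splitters.

def blocking_factor(node_ports, node_gbps, uplink_ports, uplink_gbps):
    """Ratio of node-facing bandwidth to uplink bandwidth (1.0 = non-blocking)."""
    return (node_ports * node_gbps) / (uplink_ports * uplink_gbps)

# Reading of the article: one switch per rack, all 40 ports split 2:1
# -> 80 node ports at 100 Gbps, but nothing left over for uplinks.
#
# The layout imagined in the quote: two switches per rack, each with
# 20 ports split to nodes (40 x 100 Gbps) and 20 ports kept as 200 Gbps uplinks.
per_switch = blocking_factor(node_ports=40, node_gbps=100,
                             uplink_ports=20, uplink_gbps=200)
print(f"per-switch blocking factor: {per_switch:.1f}:1")  # 1.0:1, i.e. non-blocking
```

With those assumed numbers each switch presents 4 Tbps to the nodes and 4 Tbps of uplink, so the fabric stays non-blocking while halving the per-node link speed.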