That is the strange part. How do you utilize such a vast CPU? Storage should be the back end, unless the use case is an API; in that case a gargantuan CPU sits in back, or so it seems.
On Sat, Jun 13, 2020, 9:41 PM Chris Samuel <ch...@csamuel.org> wrote:

> On 13/6/20 7:58 pm, Fischer, Jeremy wrote:
>
> > It’s my understanding that NeoCortex is going to have a petabyte or two
> > of NVME disk sitting in front of it with some HPE hardware and then
> > it’ll utilize the queues and lustre file system on Bridges2 as its front
> > end.
>
> There's more information here:
>
> https://www.psc.edu/3206-nsf-funds-neocortex-a-groundbreaking-ai-supercomputer-at-psc-2
>
> # Neocortex will use the HPE Superdome Flex, an extremely powerful,
> # user-friendly front-end high-performance computing (HPC) solution
> # for the Cerebras CS-1 servers. This will enable flexible pre- and
> # post-processing of data flowing in and out of the attached WSEs,
> # preventing bottlenecks and taking full advantage of the WSE
> # capability. HPE Superdome Flex will be robustly provisioned with
> # 24 terabytes of memory, 205 terabytes of high-performance flash
> # storage, 32 powerful Intel Xeon CPUs, and 24 network interface
> # cards for 1.2 terabits per second of data bandwidth to each
> # Cerebras CS-1.
>
> The way it reads both of these CS-1's will sit behind that single Flex.
>
> All the best,
> Chris
> --
> Chris Samuel : http://www.csamuel.org/ : Berkeley, CA, USA
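As a rough sanity check on those bandwidth figures (my arithmetic, not anything stated in the PSC announcement), 24 NICs feeding two CS-1s at 1.2 Tb/s each works out to about 100 Gb/s per card, i.e. ordinary 100GbE-class links, if one assumes the cards are split evenly between the two systems:

    # Back-of-envelope from the figures quoted above. Assumes the 24 NICs
    # are shared evenly across the two CS-1s, which the announcement does
    # not actually confirm.
    nics_total = 24
    cs1_count = 2
    per_cs1_tbps = 1.2                                  # Tb/s into each CS-1

    aggregate_tbps = per_cs1_tbps * cs1_count           # 2.4 Tb/s total
    per_nic_gbps = aggregate_tbps * 1000 / nics_total   # ~100 Gb/s per NIC

    print(f"aggregate: {aggregate_tbps} Tb/s, per NIC: {per_nic_gbps:.0f} Gb/s")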
_______________________________________________ Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit https://beowulf.org/cgi-bin/mailman/listinfo/beowulf