Just need to work out how to build large fat-trees with Ethernet.

With 100G we get by with MLAG for the spines (up to 640 hosts on 10G).
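
The 640-host figure can be reconstructed with a back-of-envelope sketch of a
two-tier leaf-spine network -- the specific port counts below (a pair of
32-port 100G spines, leaves with 2x100G up and 10G down) are my assumptions,
just one plausible layout that yields that number, not details from the thread:

```python
def leaf_spine_hosts(spine_ports=32, spine_count=2,
                     uplink_gbps=100, host_gbps=10,
                     oversubscription=1.0):
    """Max hosts in a two-tier leaf-spine fabric with MLAG spines.

    Assumes each leaf runs one uplink to each spine (the MLAG pair),
    so the leaf count is bounded by ports on a single spine.
    """
    uplinks_per_leaf = spine_count          # one link per spine
    leaves = spine_ports                    # one leaf per spine port
    uplink_bw = uplinks_per_leaf * uplink_gbps
    hosts_per_leaf = int(uplink_bw * oversubscription // host_gbps)
    return leaves * hosts_per_leaf

print(leaf_spine_hosts())  # → 640 (32 leaves x 20 hosts, non-blocking)
```

Raising the oversubscription ratio above 1.0 trades bisection bandwidth for
more hosts per leaf, which is the usual lever once the MLAG pair is maxed out.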

With our 40G/10G network, we are using OSPF/MPLS/VPLS to give a static
fat-tree...  30k lines of config applied to ~20 switches.  Nasty.

On Thu, Jun 2, 2016 at 11:57 AM, Greg Lindahl <lind...@pbm.com> wrote:

> On Thu, Jun 02, 2016 at 07:48:10AM +0800, Stu Midgley wrote:
> > That's it.  As I said, I haven't used it since the early 00's.
> >
> > With 100Gb becoming common it might be time for these sort of MPI's to
> come
> > back.
>
> You could do better than these old interfaces, too... modern ethernet
> chips have multiple send/receive queues, so you could hack up an
> interface which looks a lot more like InfiniPath's one-queue-per-core
> model -- the OpenIB code for InfiniPath also has code that maps M
> cores:N queues if M>N.
>
> -- greg
>
>
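Greg's M-cores:N-queues mapping, as a minimal sketch (the round-robin policy
is my assumption -- any surjective core-to-queue assignment would do when
M > N):

```python
def map_cores_to_queues(m_cores, n_queues):
    """Assign M cores to N hardware queues.

    One-to-one when M <= N; round-robin sharing when M > N, so every
    core always has a well-defined send/receive queue.
    """
    return {core: core % n_queues for core in range(m_cores)}
```

For example, 8 cores over 4 queues pairs cores (0,4), (1,5), (2,6), (3,7) on
queues 0-3, which keeps per-queue contention down to two cores.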


-- 
Dr Stuart Midgley
sdm...@sdm900.com
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
