Hi Peter,

Peter St. John wrote:
I was wondering if Peter K's remark generalized: if there are multiple ports, the node has a choice, which may be application dependent. One port for MPI and the other to a disk farm seems clear, but it still isn't obvious to me that a star topology with few long cables to a huge switch is always better than many short cables with more ports per node but no switches. (I myself don't have any feel for how much bottleneck a switch is, just topologically it seems scary).

FNN-like topologies make sense only when:
1) the price of the host port is low compared to a switch port.
2) the host has enough IO capacity to drive that many ports.
3) the cables are reasonably cheap/small/light.

Today, only Fast and Gigabit Ethernet satisfy all three conditions.

For everything else, a switch port costs the same as or less than a NIC (that will change when the NIC ships for "free" on the motherboard, but we are not there yet), the PCI Express bus is a bottleneck, and the cables are a major pain.
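Condition (1) can be made concrete with a back-of-the-envelope per-node cost comparison. The sketch below is purely illustrative: all prices and port counts are made-up placeholders, not real Ethernet or Myricom figures.

```python
# Hypothetical per-node cost comparison: FNN-like topology (several NICs
# per node, direct cables, no switch) vs. a star topology (one NIC per
# node, one cable, plus that node's share of a switch port).
# All numbers below are assumed placeholders for illustration only.

def fnn_cost_per_node(ports_per_node, nic_price, cable_price):
    """FNN: each node pays for several host ports and direct cables."""
    return ports_per_node * (nic_price + cable_price)

def star_cost_per_node(nic_price, cable_price, switch_port_price):
    """Star: one NIC, one cable, plus one switch port per node."""
    return nic_price + cable_price + switch_port_price

# With cheap host ports (condition 1), FNN can come out ahead:
fnn = fnn_cost_per_node(ports_per_node=4, nic_price=20, cable_price=5)
star = star_cost_per_node(nic_price=20, cable_price=5, switch_port_price=120)
print(fnn, star)  # 100 vs 145 with these assumed prices
```

With an expensive host port (say the NIC costs as much as the switch port), the same arithmetic flips in favor of the star, which is the point of condition (1).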

Bulky cables are actually the best argument for a switched topology: the biggest advantage of a switch is that it has no cables inside, just traces on a PCB.

Patrick
--
Patrick Geoffray
Myricom, Inc.
http://www.myri.com
_______________________________________________
Beowulf mailing list, Beowulf@beowulf.org
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf