Re: [Beowulf] bonding.txt really confusing, why don't I get higher aggregate bandwidth from multiple TCP connections from multiple gigabit clients with balance-alb bonding on a server?

2009-02-21 Thread Sabuj Pattanayek
> on the server. When I run netperf on the clients I can see that they
> are either connecting to eth0 or eth1 using iftop. However, I still
> can't get more than 1 Gbps.

It should be that some are connecting to eth0 and some to eth1.
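A quick way to confirm whether traffic really is being spread across both slaves is to sample the per-interface byte counters rather than eyeballing iftop. A minimal sketch, assuming a Linux server exposing /proc/net/dev and slave interfaces named eth0/eth1 (adjust the names and interval to your setup):

#!/usr/bin/env python
# Sample /proc/net/dev twice and report per-slave throughput.
# Assumes Linux; SLAVES and INTERVAL are placeholders for your setup.
import time

SLAVES = ["eth0", "eth1"]
INTERVAL = 5.0  # seconds between samples

def read_counters():
    """Return {iface: (rx_bytes, tx_bytes)} parsed from /proc/net/dev."""
    counters = {}
    with open("/proc/net/dev") as f:
        for line in f:
            if ":" not in line:
                continue  # skip the two header lines
            iface, rest = line.split(":", 1)
            fields = rest.split()
            # Field 0 is RX bytes, field 8 is TX bytes.
            counters[iface.strip()] = (int(fields[0]), int(fields[8]))
    return counters

before = read_counters()
time.sleep(INTERVAL)
after = read_counters()

for iface in SLAVES:
    rx = (after[iface][0] - before[iface][0]) * 8 / INTERVAL / 1e6
    tx = (after[iface][1] - before[iface][1]) * 8 / INTERVAL / 1e6
    print("%s: rx %.1f Mbit/s, tx %.1f Mbit/s" % (iface, rx, tx))

If only one slave ever shows traffic, the bond isn't balancing; if both carry traffic but each individual client tops out around 940 Mbit/s, the clients' own single-gigabit links are the ceiling, not the bond.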

Re: [Beowulf] bonding.txt really confusing, why don't I get higher aggregate bandwidth from multiple TCP connections from multiple gigabit clients with balance-alb bonding on a server?

2009-02-21 Thread Sabuj Pattanayek
> Are you certain the MAC spoofing is working? I'd check the ARP tables on
> your systems, and maybe sniff the wire to see if the right ARP
> broadcasts are going out.

Route shows that only bond0 has a routing table. The ARP tables on the clients I run netperf on show either the MAC address of eth
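To make that ARP check concrete: on each client, look up which of the server's slave MACs the client has cached for the server's IP. With balance-alb, different clients should resolve the same server IP to different slave MACs. A minimal sketch for a Linux client; SERVER_IP and SLAVE_MACS below are hypothetical placeholders, so substitute your server's IP and the slave MACs from its ip link output:

#!/usr/bin/env python
# Show which server MAC this client's ARP cache holds for the server IP.
# SERVER_IP and SLAVE_MACS are placeholders; substitute your own values.

SERVER_IP = "192.168.1.10"                      # hypothetical server address
SLAVE_MACS = {"00:11:22:33:44:55": "eth0",      # hypothetical eth0 MAC
              "00:11:22:33:44:56": "eth1"}      # hypothetical eth1 MAC

with open("/proc/net/arp") as f:
    next(f)  # skip the header line
    for line in f:
        fields = line.split()
        ip, mac = fields[0], fields[3]
        if ip == SERVER_IP:
            slave = SLAVE_MACS.get(mac.lower(), "unknown slave")
            print("server %s -> %s (%s)" % (ip, mac, slave))

Run it on several clients: if every client reports the same MAC, the driver's ARP rewriting isn't taking effect and all inbound traffic is landing on one slave.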

Re: [Beowulf] Re: Problems scaling performance to more than one node, GbE

2009-02-21 Thread Tiago Marques
On Tue, Feb 17, 2009 at 8:14 PM, Mike Davis wrote:
> On Mon, 16 Feb 2009, Tiago Marques wrote:
>> I must ask, doesn't anybody on this list run like 16 cores on two nodes
>> well, for a code and job that completes like in a week?
>
> For GROMACS do a google search on GROMACS parallel sc
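For what it's worth, when judging whether a code "runs well" across nodes it helps to reduce the wall-clock numbers to speedup and parallel efficiency rather than comparing raw times. A trivial sketch; the timings below are illustrative placeholders, not measurements:

# Compute speedup and parallel efficiency from wall-clock times.
# The timings here are made-up placeholders, not real benchmarks.
timings = {8: 1000.0, 16: 620.0}   # cores -> seconds for the same job
base_cores = min(timings)

for cores, t in sorted(timings.items()):
    speedup = timings[base_cores] / t
    efficiency = speedup / (cores / base_cores)
    print("%2d cores: speedup %.2fx, efficiency %.0f%%"
          % (cores, speedup, efficiency * 100))

A sharp efficiency drop going from one node to two over GbE tends to point at interconnect latency rather than the code itself.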

Re: [Beowulf] Supermicro 2U

2009-02-21 Thread Gerry Creager
Andrew Piskorski wrote:
> On Fri, Feb 20, 2009 at 04:10:33PM, John Hearns wrote:
>> It is a tad lame to repeat articles from HPCwire here, but I can't help it.
>> New. Shiny. http://www.supermicro.com/products/nfo/2UTwin2.cfm
>
> So it gets to use all larger 80 mm case fans, while stuffing 4 dual-so

Re: [Beowulf] Supermicro 2U

2009-02-21 Thread Andrew Piskorski
On Fri, Feb 20, 2009 at 04:10:33PM, John Hearns wrote:
> It is a tad lame to repeat articles from HPCwire here, but I can't help it.
> New. Shiny. http://www.supermicro.com/products/nfo/2UTwin2.cfm

So it gets to use all larger 80 mm case fans, while stuffing 4 dual-socket nodes into one 2U c

Re: [Beowulf] Please help to setup Beowulf

2009-02-21 Thread John Hearns
2009/2/20 Joe Landman:
> Bogdan Costescu wrote:
>
> This said, we tend to suggest our customers look for OpenMPI compatibility
> first. HP MPI works pretty well also, though it (and other binary-only
> stacks) tend to be hard linked against (older) particular InfiniBand stacks
> ... makes support
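One way to see what a binary-only MPI application is actually hard-linked against is to inspect its shared-library dependencies with ldd. A minimal sketch, assuming a Linux host; the binary path below is a hypothetical placeholder, so point it at your own application:

#!/usr/bin/env python
# List MPI- and InfiniBand-related shared libraries a binary links against.
# BINARY is a hypothetical path; substitute your application.
import subprocess

BINARY = "/opt/apps/solver/bin/solver"   # placeholder path
INTERESTING = ("libmpi", "libmpich", "libibverbs", "librdmacm")

out = subprocess.check_output(["ldd", BINARY]).decode()
for line in out.splitlines():
    if any(name in line for name in INTERESTING):
        print(line.strip())

If the output shows an old libibverbs or a vendor-specific verbs library, that is the hard linkage being described above; an application built from source against OpenMPI can instead follow whatever InfiniBand stack the site actually carries.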