> on the server. When I run netperf on the clients I can see that they
> are either connecting to eth0 or eth1 using iftop. However, I still
> can't get more than 1gbps.
Should be: some are connecting to eth0 and some to eth1.
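For what it's worth, a quick way to sanity-check the bonding setup, assuming
the stock Linux bonding driver (interface names are the ones from this thread):

    # Show the bonding mode (balance-rr, balance-alb, 802.3ad, ...) and slave state:
    cat /proc/net/bonding/bond0
    # Watch per-slave TX counters; both should grow if traffic is being spread:
    watch -n1 'cat /sys/class/net/eth0/statistics/tx_bytes /sys/class/net/eth1/statistics/tx_bytes'

Keep in mind that in most bonding modes a single TCP stream hashes onto one
slave, so one netperf stream tops out at 1Gb/s no matter what; only the
aggregate over several streams can exceed it.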
> Are you certain the MAC spoofing is working? I'd check the ARP tables on
> your systems, and maybe sniff the wire to see if the right ARP
> broadcasts are going out.
Route shows that only bond0 has routing table entries. The ARP tables on the
clients I run netperf on show either the MAC address of eth0 or that of eth1.
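And if you want to watch the ARP traffic itself, something along these lines
should do (untested here; interface name as above):

    # Print ARP requests/replies with Ethernet headers, no name resolution:
    tcpdump -n -e -i bond0 arp
    # Dump the kernel ARP cache on a client:
    arp -n        # or: ip neigh show

Note that with balance-alb (mode 6) the driver deliberately hands different
slave MACs to different clients in its ARP replies, so a mix of eth0/eth1
MACs across client caches is expected behaviour there, not a fault.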
On Tue, Feb 17, 2009 at 8:14 PM, Mike Davis wrote:
>
> On Mon, 16 Feb 2009, Tiago Marques wrote:
>>
>> I must ask, doesn't anybody on this list run like 16 cores on two nodes
>> well, for a code and job that completes like in a week?
>>
> For GROMACS, do a Google search on GROMACS parallel scaling.
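For reference, a GROMACS-4-era parallel launch over two nodes looks roughly
like this (hostfile name, input file and rank count are illustrative, not
from this thread):

    # 16 MPI ranks across two 8-core nodes listed in ./hosts:
    mpirun -np 16 -hostfile hosts mdrun -s topol.tpr -deffnm run

Whether that scales for a week-long job depends mostly on system size and the
PME/communication cost, which is why benchmarking a short run first is worth
the time.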
Andrew Piskorski wrote:
On Fri, Feb 20, 2009 at 04:10:33PM +, John Hearns wrote:
> It is a tad lame to repeat articles from HPCwire here, but I can't help it.
> New. Shiny. http://www.supermicro.com/products/nfo/2UTwin2.cfm
So it gets to use larger 80 mm case fans throughout, while stuffing 4
dual-socket nodes into one 2U chassis.
2009/2/20 Joe Landman:
> Bogdan Costescu wrote:
> This said, we tend to suggest our customers look for OpenMPI compatibility
> first. HP MPI works pretty well also, though it (and other binary-only
> stacks) tends to be hard-linked against (older) particular Infiniband stacks
> ... which makes support harder.
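A quick way to see what a given MPI binary is actually hard-linked against
(the binary name here is a placeholder):

    # List shared-library dependencies, picking out MPI and verbs libraries:
    ldd ./my_mpi_app | egrep 'mpi|ibverbs|rdma'

If the vendor stack was linked against a different (older) libibverbs soname
than the one your Infiniband stack provides, ldd reports it as "not found",
which is usually where the support pain starts.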