Re: [Beowulf] itanium vs. x86-64

2009-02-14 Thread richard.walsh
>- Original Message -
>From: "Michael Brown"
>To: "Beowulf Mailing List"
>Sent: Tuesday, February 10, 2009 3:52:04 PM GMT -05:00 US/Canada Eastern
>Subject: Re: [Beowulf] itanium vs. x86-64
>
>node. Hopefully, with the Nehalem and Tukwila sharing the same socket we
>might be ab

[Beowulf] Re: Problems scaling performance to more than one node, GbE

2009-02-14 Thread David Mathog
Tiago Marques wrote:
> I've been trying to get the best performance on a small cluster we have here
> at University of Aveiro, Portugal, but I've not been able to get most
> software to scale to more than one node.
> The problem with this setup is that even calculations that take more than 15
> da

Re: [Beowulf] Problems scaling performance to more than one node, GbE

2009-02-14 Thread John Hearns
2009/2/13 Tiago Marques:
> As for software, I'm using Gentoo Linux, ICC/IFC/GotoBLAS, tried scalapack
> with no benefit, OpenMPI and Torque, running in x86-64 mode.

Try Pallas/PMB, or the test programs included with OpenMPI, which measure
network performance.
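[For a quick first look before installing the full benchmark suites, a
minimal MPI ping-pong in C gives a rough estimate of inter-node latency and
bandwidth over GbE. This is only a sketch, assuming an OpenMPI install; the
host names and message size below are placeholders, and the Pallas/PMB and
bundled OpenMPI tests mentioned above are far more thorough.]

/* pingpong.c - rough inter-node latency/bandwidth check.
 * Build:  mpicc pingpong.c -o pingpong
 * Run:    mpirun -np 2 --host node1,node2 ./pingpong   (host names are
 * placeholders; put one rank on each node so the traffic crosses the wire.)
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank, size;
    const int iters = 1000;
    const int msg_bytes = 1 << 20;        /* 1 MiB messages */
    char *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {
        if (rank == 0) fprintf(stderr, "run with exactly 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    buf = malloc(msg_bytes);
    memset(buf, 0, msg_bytes);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            /* rank 0 sends, then waits for the echo */
            MPI_Send(buf, msg_bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, msg_bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else {
            /* rank 1 echoes everything back */
            MPI_Recv(buf, msg_bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, msg_bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();
    if (rank == 0) {
        double rt = (t1 - t0) / iters;    /* avg round-trip time */
        printf("avg round trip: %.1f us, ~%.1f MB/s one-way\n",
               rt * 1e6, 2.0 * msg_bytes / rt / 1e6);
    }
    free(buf);
    MPI_Finalize();
    return 0;
}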

Re: [Beowulf] Problems scaling performance to more than one node, GbE

2009-02-14 Thread Nicholas M Glykos
Hi Tiago,

> I tried with Gromacs ...

Concerning your MD tests, would it be worthwhile to also check NAMD's
parallel efficiency? Based on past experience, I would suggest that you try
both the UDP- and TCP-based versions (don't forget to use the +giga and
possibly the +atm flags). Also, kee
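[For the UDP ("net") build of NAMD, an invocation with the +giga flag
mentioned above would look roughly like the following; the processor count,
node-list file, and the apoa1 config are placeholders, not anything from the
original post:

  charmrun ++nodelist ./nodelist +p16 namd2 +giga apoa1.namd

where +p16 sets the number of processors and ++nodelist points at a file
listing the cluster's nodes.]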