[Beowulf] Create cluster : questions

2006-09-06 Thread Maxence Dunnewind
Hi. I'm a user of the Ubuntu Linux OS, and also a packager for this OS. As you may know, packaging can take a lot of time, mainly during the build process. I would like to create a public cluster to help packagers. Any Ubuntu user could agree to let us use their computer in the cluster. But the cluster system M

Re: [Beowulf] Optimal BIOS settings for Tyan K8SRE

2006-09-06 Thread stephen mulcahy
Hi Bruce, Do you have any idea what the performance impact of enabling scrubbing is on your systems? Did you do any before/after benchmarking? Thanks, -stephen Bruce Allen wrote: > On Sun, 3 Sep 2006, Mark Hahn wrote: > >>> ECC Features >>> ECC Enabled >>> ECC Scrub Re
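The before/after benchmarking Stephen asks about would normally be done with something like STREAM; as a rough illustration only, here is a crude memory-copy probe one could run with scrubbing off and then on. The buffer size and timing approach are my own assumptions, not anything from the thread.

```python
# Crude memory-bandwidth probe (illustrative sketch, NOT a substitute for
# a real benchmark such as STREAM). Run before and after toggling ECC
# scrubbing in the BIOS and compare the reported figures.
import time

buf = bytearray(64 * 1024 * 1024)        # 64 MiB working set (assumed size)
t0 = time.perf_counter()
copy = bytes(buf)                         # one full read + write pass
dt = time.perf_counter() - t0
print(f"{len(buf) / dt / 1e9:.2f} GB/s effective copy bandwidth")
```

A single pass like this is noisy; averaging several runs would give a steadier number.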

[Beowulf] RE: Q: Experiences with high memory (64GB+) nodes?

2006-09-06 Thread Brian Dobbins
[NOTE: I apologise for anyone receiving this twice; my other e-mail address is behaving strangely.]Hi everyone,  I'm scouring through the web and reading the archives trying to trackdown experiences and information from people with large-memory commodity nodes.  A lab I know is looking at getting s

Re: [Beowulf] Optimal BIOS settings for Tyan K8SRE

2006-09-06 Thread stephen mulcahy
Hi, Sure. We have a head node which acts as an NFS server for the diskless compute nodes in the cluster. On the head node, I had to use the following BIOS settings to ensure the OS could see the full 4GB of physical memory installed, Main Installed O/S Linux Memory Hole
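After changing memory-hole settings like the ones Stephen lists, the obvious sanity check is whether the kernel actually reports the full 4GB. A minimal sketch, reading the standard Linux /proc/meminfo interface:

```python
# Confirm how much RAM the kernel sees after a BIOS memory-hole change.
# /proc/meminfo is the standard Linux interface; MemTotal is in kB.
with open("/proc/meminfo") as f:
    for line in f:
        if line.startswith("MemTotal:"):
            kb = int(line.split()[1])
            print(f"kernel sees {kb / 1024 / 1024:.1f} GiB of RAM")
            break
```

Note that MemTotal is slightly below the installed amount even when everything is mapped, since the kernel reserves some memory for itself.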

Re: [Beowulf] Optimal BIOS settings for Tyan K8SRE

2006-09-06 Thread stephen mulcahy
Hi Mark, Thanks for your mail. See my comments below. Mark Hahn wrote: >> 270s) which are being used primarily for Oceanographic modelling (MPICH2 >> running on Debian/Linux 2.6 kernel). > > on gigabit? Yes, on gigabit (is this an uhu moment? :) Someone has suggested I should be looking at Ope

[Beowulf] NCSU and FORTRAN

2006-09-06 Thread Wallace Pitts
Robert, My wife is a biomathematics student at NCSU. She is currently working on a Markov chain simulation using MATLAB. The goal is to use some sort of search routine to find a set of transition-matrix parameters that minimize the sum of squares. The problem simulates, say, 10,000 molecules
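The approach Wallace describes can be sketched in miniature: pick transition-matrix parameters for a small Markov chain, compute its long-run behaviour, and search for the parameter minimizing a sum of squares against target data. Everything here (the two-state chain, the fixed second row, the grid search, the target distribution) is an illustrative assumption, not the actual NCSU problem.

```python
# Toy version of the fitting problem: one free parameter p in a 2-state
# Markov chain; minimize squared error between the chain's stationary
# distribution and a made-up "observed" distribution.
observed = [0.6, 0.4]                     # target occupancies (made up)

def P_of(p):
    # Row-stochastic transition matrix; second row fixed for illustration.
    return [[1 - p, p], [0.3, 0.7]]

def stationary(P, steps=500):
    # Power iteration: repeatedly apply v <- v P until it converges.
    v = [0.5, 0.5]
    for _ in range(steps):
        v = [sum(v[i] * P[i][j] for i in range(2)) for j in range(2)]
    return v

def sse(p):
    s = stationary(P_of(p))
    return sum((s[j] - observed[j]) ** 2 for j in range(2))

# Crude grid search standing in for a real optimizer (e.g. fminsearch).
best_err, best_p = min((sse(p / 100), p / 100) for p in range(1, 100))
print(f"best p = {best_p}, sse = {best_err:.2e}")
```

In MATLAB the same shape of problem would typically hand `sse` to `fminsearch` rather than grid-searching, and a cluster helps mainly by evaluating many starting points or parameter sets in parallel.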

Re: [Beowulf] cluster softwares supporting parallel CFD computing

2006-09-06 Thread Eric W. Biederman
"Daniel Kidger" <[EMAIL PROTECTED]> writes: > Bogdan, > > Parallel applications with lots of MPI traffic should run fine on a cluster > with large jiffies - just as long as the interconnect you use doesn't need to > take any interrupts. (Interrupts add hugely to the latency figure) I know a lot o

Re: [Beowulf] cluster softwares supporting parallel CFD computing

2006-09-06 Thread Ivan Paganini
We, at LFC, have some specific requests, and our cluster is not large (much less powerful than yours, a bunch of old PIIIs), so we use Debian stable as the OS, MPICH 1.2.7 as the message-passing library, the Portland Group and Intel compilers, and netlib LAPACK and ScaLAPACK. We also use ACML with

Re: [Beowulf] Optimal BIOS settings for Tyan K8SRE

2006-09-06 Thread Ivan Paganini
Can you post what tweaking you had to do to access the 4GB? Thank you. On 8/31/06, stephen mulcahy <[EMAIL PROTECTED]> wrote: Hi, I'm maintaining a 20-node cluster of Tyan K8SREs (4GB RAM, dual Opteron 270s) which are being used primarily for oceanographic modelling (MPICH2 running

[Beowulf] Q: Experiences with high memory (64GB+) nodes?

2006-09-06 Thread Brian Dobbins
Hi everyone, I'm scouring the web and reading the archives trying to track down experiences and information from people with large-memory commodity nodes. A lab I know is looking at getting several 64GB or even 128GB Opteron nodes, but unfortunately I've not been focused on hardware mu

RE: [Beowulf] cluster softwares supporting parallel CFD computing

2006-09-06 Thread Daniel Kidger
Bogdan, Parallel applications with lots of MPI traffic should run fine on a cluster with large jiffies - just as long as the interconnect you use doesn't need to take any interrupts. (Interrupts add hugely to the latency figure) Daniel -Original Message- From: [EMAIL PROTECTED] [mailto:
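Daniel's point about interrupts dominating the latency figure can be made concrete with back-of-envelope arithmetic. The numbers below are assumed for illustration (neither the wire latency nor the interrupt cost comes from the thread):

```python
# Illustrative arithmetic (assumed numbers): how one interrupt per MPI
# message inflates small-message latency, and what the scheduler tick
# period looks like at two common HZ (jiffy) settings.
wire_latency_us = 30.0    # assumed gigabit-class MPI latency
irq_overhead_us = 10.0    # assumed cost of taking one interrupt

with_irq = wire_latency_us + irq_overhead_us
print(f"latency grows {with_irq / wire_latency_us:.2f}x "
      f"with one interrupt per message")

for hz in (100, 1000):
    print(f"HZ={hz}: scheduler tick every {1000 / hz:.0f} ms")
```

The jiffy rate itself matters far less than per-message interrupts, which is why interconnects that poll in user space (bypassing the kernel) post much lower latencies.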

Re: [Beowulf] cluster softwares supporting parallel CFD computing

2006-09-06 Thread Bogdan Costescu
On Mon, 4 Sep 2006, Mark Hahn wrote: > 100 Hz scheduler ticks might make sense Please don't mention this to beginners (which is the impression the original message left me with) - if they care enough to search the information available on the subject, they will either make up their own minds or com