Hi. I'm a user of the Ubuntu Linux OS, and also a packager for it. As you may know, packaging can take a lot of time, mainly during the build process. I would like to create a public cluster to help packagers. Any Ubuntu user could agree to let us use their computer in the cluster. But the cluster system M
Hi Bruce,
Do you have any idea what the performance impact from enabling scrubbing
is on your systems? Did you do any before/after benchmarking?
Thanks,
-stephen
Bruce Allen wrote:
> On Sun, 3 Sep 2006, Mark Hahn wrote:
>
>>> ECC Features
>>> ECCEnabled
>>> ECC Scrub Re
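ECC scrubbing consumes a slice of memory bandwidth in the background, so a large streaming copy is the kind of workload where a before/after difference would show up first. A rough probe along these lines (the array size and the use of numpy are my own assumptions; this is a sketch, not a substitute for a proper benchmark like STREAM):

    # Crude memory-bandwidth probe for before/after comparisons,
    # e.g. with ECC scrubbing disabled vs. enabled in the BIOS.
    import numpy as np
    import time

    n = 50 * 1000 * 1000              # ~400 MB of float64, defeats caches
    a = np.ones(n)
    b = np.empty(n)

    best = float("inf")
    for _ in range(5):                # keep the best of several runs
        t0 = time.time()
        b[:] = a                      # streaming copy: one read + one write
        best = min(best, time.time() - t0)

    moved_gb = 2 * n * 8 / 1e9        # bytes moved: read a, write b
    print("copy bandwidth: %.2f GB/s" % (moved_gb / best))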
[NOTE: I apologise to anyone receiving this twice; my other e-mail address is behaving strangely.] Hi everyone, I'm scouring through the web and reading the archives trying to track down experiences and information from people with large-memory commodity
nodes. A lab I know is looking at getting s
Hi,
Sure. We have a head node which acts as an NFS server for the diskless
compute nodes in the cluster. On the head node, I had to use the
following BIOS settings to ensure the OS could see the full 4GB of
physical memory installed:
Main
  Installed O/S: Linux
  Memory Hole
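For anyone repeating this, it's easy to check from userspace whether the kernel actually sees the full 4GB once the memory-hole setting is changed. A minimal sketch (it reads /proc/meminfo, which is standard on Linux; note MemTotal excludes memory reserved by the kernel itself, so expect a figure slightly under 4GB):

    # Report total physical memory visible to the kernel.
    # A 4GB machine losing memory to a PCI hole will report
    # closer to 3.x GB here.
    def mem_total_gb():
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1]) / (1024.0 * 1024.0)
        raise RuntimeError("MemTotal not found in /proc/meminfo")

    if __name__ == "__main__":
        print("kernel sees %.2f GB of RAM" % mem_total_gb())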
Hi Mark,
Thanks for your mail.
See my comments below.
Mark Hahn wrote:
>> 270s) which are being used primarily for Oceanographic modelling (MPICH2
>> running on Debian/Linux 2.6 kernel).
>
> on gigabit?
Yes, on gigabit (is this an uh-oh moment? :) Someone has suggested I
should be looking at Ope
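For what it's worth, the usual way to see what the interconnect is costing you is a small-message ping-pong between two ranks. A minimal sketch (mpi4py is my choice purely for brevity; the same measurement works over any MPI, including the MPICH2 mentioned above):

    # Run with: mpiexec -n 2 python pingpong.py
    from mpi4py import MPI
    import time

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    buf = bytearray(8)        # tiny message: measures latency, not bandwidth
    reps = 10000

    comm.Barrier()
    t0 = time.time()
    for _ in range(reps):
        if rank == 0:
            comm.Send(buf, dest=1)
            comm.Recv(buf, source=1)
        else:
            comm.Recv(buf, source=0)
            comm.Send(buf, dest=0)
    elapsed = time.time() - t0

    if rank == 0:
        # each rep is a round trip (two messages), so halve it
        print("one-way latency: %.1f us" % (elapsed / reps / 2 * 1e6))

On gigabit you would typically see tens of microseconds here, versus single digits on the low-latency interconnects people keep recommending.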
Robert;
My wife is a biomathematics student at NCSU. She is currently working
on a Markov chain simulation using MATLAB. The goal is to use some sort
of search routine to find a set of transition matrix parameters that
minimize the sum of squares. The problem simulates, say, 10,000 molecules
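In case it helps to make the setup concrete, here is a minimal sketch of that kind of search in Python (the two-state chain, the made-up observed data, and scipy's Nelder-Mead minimizer are all illustrative assumptions, not her actual model):

    # Least-squares fit of Markov transition parameters.
    import numpy as np
    from scipy.optimize import minimize

    observed = np.array([0.7, 0.3])     # hypothetical observed occupancy

    def occupancy(params, steps=1000):
        a, b = np.clip(params, 1e-6, 1 - 1e-6)
        P = np.array([[1 - a, a],
                      [b, 1 - b]])      # two-state transition matrix
        state = np.array([1.0, 0.0])
        for _ in range(steps):
            state = state @ P           # propagate the chain
        return state

    def sum_of_squares(params):
        return float(np.sum((occupancy(params) - observed) ** 2))

    result = minimize(sum_of_squares, x0=[0.5, 0.5], method="Nelder-Mead")
    print("fitted a, b:", result.x, "SSE:", result.fun)

The real problem, with 10,000 simulated molecules, is this same pattern with a far more expensive objective function, which is presumably where a cluster comes in.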
"Daniel Kidger" <[EMAIL PROTECTED]> writes:
> Bogdan,
>
> Parallel applications with lots of MPI traffic should run fine on a cluster
> with large jiffies - just as long as the interconnect you use doesn't need to
> take any interrupts. (Interrupts add hugely to the latency figure)
I know a lot o
We at LFC have some specific requirements, and our cluster is not large (much less powerful than yours, a bunch of old PIIIs), so we use Debian stable as the OS, MPICH 1.2.7 as the message-passing library, the Portland Group and Intel compilers, and netlib LAPACK and ScaLAPACK. We also use ACML with
Can you post the tweaks you made to get access to the full 4GB? Thank you.

On 8/31/06, stephen mulcahy <
[EMAIL PROTECTED]> wrote:
Hi,

I'm maintaining a 20-node cluster of Tyan K8SREs (4GB RAM, dual Opteron 270s) which are being used primarily for Oceanographic modelling (MPICH2 running
Hi everyone,
I'm scouring through the web and reading the archives trying to track
down experiences and information from people with large-memory commodity
nodes. A lab I know is looking at getting several 64GB or even 128GB
Opteron nodes, but unfortunately I've not been focused on hardware mu
Bogdan,
Parallel applications with lots of MPI traffic should run fine on a cluster
with large jiffies - just as long as the interconnect you use doesn't need to
take any interrupts. (Interrupts add hugely to the latency figure)
Daniel
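Daniel's point is easy to see from userspace: the jiffy length bounds how fine-grained timer-driven wakeups can be, which is why interconnects that poll instead of taking interrupts don't care about HZ. A rough probe (purely illustrative; kernels with high-resolution timers will blur the effect):

    # Ask for a 100 us sleep and see what the kernel actually grants.
    # On a 100 Hz kernel wakeups tend to round toward 10 ms; on
    # 1000 Hz, toward 1 ms.
    import time

    samples = []
    for _ in range(100):
        t0 = time.time()
        time.sleep(0.0001)
        samples.append(time.time() - t0)

    samples.sort()
    print("median observed sleep: %.3f ms" % (samples[50] * 1000))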
On Mon, 4 Sep 2006, Mark Hahn wrote:
> 100 Hz scheduler ticks might make sense
Please don't mention this to beginners (which is the impression the
original message left me with) - if they care enough to search the
information available on the subject, they will either make up their
own minds or com