Geoff wrote:
..Interesting discussion deleted..
As a funny aside, I once knew a sysadmin who applied 24-hour time limits
to all queues of all clusters he managed in order to force researchers
to think about checkpoints and smart restarts. I couldn't understand
why so many folks from his
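(A minimal sketch of the checkpoint-and-restart pattern such a policy pushes
people toward; purely illustrative, with the state file name, interval, and
step-counter "state" all made up for the example:)

#include <stdio.h>
#include <stdlib.h>

#define CKPT_FILE  "state.ckpt"   /* hypothetical checkpoint file name */
#define CKPT_EVERY 1000           /* steps between checkpoints         */
#define MAX_STEPS  100000

static long load_checkpoint(void)
{
    /* Smart restart: if a checkpoint exists, resume from it; else start at 0. */
    FILE *f = fopen(CKPT_FILE, "r");
    long step = 0;
    if (f && fscanf(f, "%ld", &step) != 1)
        step = 0;
    if (f) fclose(f);
    return step;
}

static void save_checkpoint(long step)
{
    FILE *f = fopen(CKPT_FILE, "w");
    if (!f) { perror(CKPT_FILE); exit(1); }
    fprintf(f, "%ld\n", step);
    fclose(f);
}

int main(void)
{
    long step = load_checkpoint();
    for (; step < MAX_STEPS; step++) {
        /* ... one unit of real work would go here ... */
        if (step % CKPT_EVERY == 0)
            save_checkpoint(step);    /* periodic checkpoint */
    }
    save_checkpoint(step);            /* final state before the queue kills us */
    return 0;
}

With a 24-hour wall clock limit, a job killed mid-run simply resubmits and
picks up at the last saved step instead of starting over.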
On Jan 17, 2008 1:09 AM, Joe Landman <[EMAIL PROTECTED]> wrote:
> That said, and the point of this is that many HPC apps are rapidly
> becoming IO bound, as they need to move ginormous (meaning really large)
> amounts of data to and from disk, and MPI codes usually need to move
> data at the lowest
On Wed, 16 Jan 2008, Jeffrey B. Layton wrote:
Dear Jeff:
Joe Landman wrote:
Just some thoughts, hopefully not all that flammable (Jeff, what is that
rule? I am being asked, and I don't have an answer ...)
Rule: (Theorem)
Anything that appears to be flame-bait, actually is.
Corollary:
Not m
Jeffrey B. Layton wrote:
Joe Landman wrote:
Just some thoughts, hopefully not all that flammable (Jeff, what is
that rule? I am being asked, and I don't have an answer ...)
Rule: (Theorem)
Anything that appears to be flame-bait, actually is.
Ahhh
I wonder if we can say "flame-bait is
Joe Landman wrote:
Just some thoughts, hopefully not all that flammable (Jeff, what is
that rule? I am being asked, and I don't have an answer ...)
Rule: (Theorem)
Anything that appears to be flame-bait, actually is.
Corollary:
No matter what you say, no matter how much experience
you have,
> >- With multi-core processors, to get the best performance you want to
> > assign a process to a core.
>
> Excuse my ignorance, please, but can someone tell me how to do that
> on Linux (2.6 kernels would be fine)?
Use an MPI which does this for you?
Two examples are InfiniPath MPI and Open MPI.
No experience running COAMPS, but for WRF I think your proposed system
will work well. Memory bandwidth will play a role in performance, but
file IO will also. InfiniBand _is_ worth the cost/effort.
I'd strongly recommend Lustre/Gluster or GFS over NFS for this.
gerry
Anand Vaidya wrote:
We a
Meng Kuan wrote:
We performed some benchmark testing with linpack and bonnie++ on the
VM and on the physical host. For para-virtualized VMs, the linpack
performance is on par with the physical host. However, for bonnie++
tests, para-virtualized VMs fell way behind the physical host's
performance. In
Cool. Thanks.
Mike
At 09:43 AM 1/16/2008, Shannon V. Davidson wrote:
Michael H. Frese wrote:
At 08:31 AM 1/16/2008, Jeffrey B. Layton wrote:
- With multi-core processors, to get the best performance you want to
assign a process to a core.
Excuse my ignorance, please, but can someone te
Michael H. Frese wrote:
At 08:31 AM 1/16/2008, Jeffrey B. Layton wrote:
- With multi-core processors, to get the best performance you want to
assign a process to a core.
Excuse my ignorance, please, but can someone tell me how to do that on
Linux (2.6 kernels would be fine)?
sched_setaffinity()
We are in the process of acquiring a new cluster for running weather modelling
software, viz. NRL COAMPS, WRF and NHM (Japan).
We are currently running COAMPS on a cluster of 50+ dual-socket DC Opterons
over GigE, with NFS, CentOS4 and 1GB RAM/core; the performance seems to be
limited by I/O (network
Try the man pages for the taskset command on a Linux 2.6 machine. There are
also system calls sched_setaffinity() and sched_getaffinity()
Regards,
Bill.
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Michael H. Frese
Sent: January 16, 2008 11:33 AM
T
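(A minimal sketch of the sched_setaffinity() route mentioned above, assuming a
2.6 kernel and glibc's cpu_set_t macros; the command-line core argument and the
pin-to-one-core choice are just for illustration:)

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    int core = (argc > 1) ? atoi(argv[1]) : 0;   /* core to pin to */
    cpu_set_t mask;

    CPU_ZERO(&mask);          /* clear the CPU mask            */
    CPU_SET(core, &mask);     /* allow only the requested core */

    /* pid 0 means the calling process; each MPI rank would pin itself */
    if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    printf("pid %d pinned to core %d\n", (int)getpid(), core);
    /* ... real work runs here, staying on that core ... */
    return 0;
}

From the shell, 'taskset -c 3 ./a.out' starts a process pinned to core 3, and
'taskset -p' inspects or changes the mask of an already-running pid; some MPIs
will also set affinity for you at launch.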
At 08:31 AM 1/16/2008, Jeffrey B. Layton wrote:
- With multi-core processors, to get the best performance you want to
assign a process to a core.
Excuse my ignorance, please, but can someone tell me how to do that
on Linux (2.6 kernels would be fine)?
The kernel scheduler -- as opposed to
Jeffrey B. Layton wrote:
Anyway, my 2 cents (and probably my last, since this topic falls under
Landman's Rule of flammability).
uh... er ... uh huh ? Hey ... the coffee hasn't quite kicked in
yet, and we have been pounding out DragonFly code (and it is working ...
woo hoo! Jobs submit
Douglas Eadline wrote:
I get the desire for fault tolerance etc. and I like the idea
of migration. It is just that many HPC people have spent
careers getting applications/middleware as close to the bare
metal as possible. The whole VM concept seems orthogonal to
this goal. I'm curious how people
Ashley Pittman wrote:
On Wed, 2008-01-16 at 09:18 -0500, Douglas Eadline wrote:
I get the desire for fault tolerance etc. and I like the idea
of migration. It is just that many HPC people have spent
careers getting applications/middleware as close to the bare
metal as possible. The whole VM con
On Wed, 16 Jan 2008, Douglas Eadline wrote:
I get the desire for fault tolerance etc. and I like the idea
of migration. It is just that many HPC people have spent
careers getting applications/middleware as close to the bare
metal as possible. The whole VM concept seems orthogonal to
this goal.
On Wed, 2008-01-16 at 09:18 -0500, Douglas Eadline wrote:
> I get the desire for fault tolerance etc. and I like the idea
> of migration. It is just that many HPC people have spent
> careers getting applications/middleware as close to the bare
> metal as possible. The whole VM concept seems ortho
On Jan 16, 2008 9:19 PM, Douglas Eadline <[EMAIL PROTECTED]> wrote:
> While your project looks interesting and I like the idea of
> VMs, I have not seen a good answer to the fact that VM = layers
> and in HPC layers = latency. Any thoughts? Also, is it open source?
We performed some benchm
I get the desire for fault tolerance etc. and I like the idea
of migration. It is just that many HPC people have spent
careers getting applications/middleware as close to the bare
metal as possible. The whole VM concept seems orthogonal to
this goal. I'm curious how people are approaching this
pr
I certainly cannot speak for the VMC project, but application migration
and fault tolerance (the primary benefits of VMs, other than easy access to
heterogeneous environments) are always going to result in a
performance hit of some kind. You cannot expect to do more things with no
overhead.
While your project looks interesting and I like the idea of
VMs, I have not seen a good answer to the fact that VM = layers
and in HPC layers = latency. Any thoughts? Also, is it open source?
--
Doug
> Greetings,
>
> I would like to announce the availability of VMC (Virtual Machine
> Con