Try the man pages for the taskset command on a Linux 2.6 machine. There are
also the system calls sched_setaffinity() and sched_getaffinity().
Regards,
Bill.
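The affinity calls mentioned above are also exposed through Python's os module (Linux-only), which makes for a quick sketch; `taskset -c 0 ./myapp` is the rough command-line equivalent:

```python
import os

# CPUs this process is currently allowed to run on (pid 0 = "this process").
allowed = os.sched_getaffinity(0)
print("allowed:", allowed)

# Pin the process to a single CPU chosen from the allowed set,
# then read the mask back to confirm.
cpu = min(allowed)
os.sched_setaffinity(0, {cpu})
print("pinned:", os.sched_getaffinity(0))  # -> {cpu}
```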
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Michael H. Frese
Sent: January 16, 2008 11:33 AM
T
cluster. I
have no problem as such with many users logging on to the cluster
simultaneously. Suppose that I am free to use the cluster dedicatedly for my
single parallel application.
1) Do I really need a cluster scheduler installed on the cluster? Should I use a
scheduler?
[Bill Bryce] If you are
In response to the question:
> PBS, Cluster Resources, and LSF all have some type of web portal where
> you can do some of these things. Of course they are commercial and
> sometimes not always the most flexible.
Do they expose some sort of API as well?
There is the DRMAA API that SGE, PBS and
Bill.
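For the API question, DRMAA gives a scheduler-neutral submission interface. A sketch using the DRMAA 1.0 Python binding (the drmaa package; this assumes a live SGE/PBS/LSF behind it, so it is not runnable standalone, and the command and arguments are purely illustrative):

```python
import drmaa

with drmaa.Session() as s:
    jt = s.createJobTemplate()
    jt.remoteCommand = '/bin/sleep'   # illustrative job
    jt.args = ['60']
    job_id = s.runJob(jt)             # submit through whatever DRM is configured
    info = s.wait(job_id, drmaa.Session.TIMEOUT_WAIT_FOREVER)
    print(job_id, info.exitStatus)
    s.deleteJobTemplate(jt)
```

The same binding pattern exists for C and Java; the point of DRMAA is that the script does not change when you swap SGE for PBS or LSF underneath.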
-Original Message-
From: Toon Knapen [mailto:[EMAIL PROTECTED]
Sent: Wednesday, April 25, 2007 3:43 AM
To: Bill Bryce
Cc: Tim Cutts; beowulf@beowulf.org
Subject: Re: [Beowulf] scheduler policy design
Bill Bryce wrote:
>
> 2) use the LSF resource reservation mechani
To solve the problem below that Toon describes, where the scheduler
believes 4 jobs can co-exist on a single node but they cannot, because
they are I/O (disk) bound jobs and will thrash the system:
There are several ways to do this in LSF; here are two...
1) create a new resource for the type of job cal
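As a sketch of option 1 (the resource name and file layout here are illustrative, from memory of LSF-era configuration, not from the original message): declare a countable per-host resource, map it onto the hosts, then have the I/O-bound jobs reserve it at submission so only one lands per node.

```
# lsf.shared -- declare a numeric resource for I/O-heavy jobs
Begin Resource
RESOURCENAME  TYPE     INTERVAL  INCREASING  DESCRIPTION
iojob         Numeric  ()        N           (I/O-bound job slots)
End Resource

# lsf.cluster.<cluster> -- give each host one iojob slot
Begin ResourceMap
RESOURCENAME  LOCATION
iojob         (1@[default])
End ResourceMap

# submission: each I/O-bound job reserves the slot for its lifetime
bsub -R "rusage[iojob=1]" ./my_io_bound_job
```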
ng it...'somewhere safe' is not really an ideal solution,
especially when the user's password changes and the job scheduler does not
pick up the change.
Regards,
Bill Bryce
Product Manager
Platform Open Cluster Stack
-Original Message-
From: John Vert [mailto:[EMAIL PROTECTED]
estriction with other MPIs such as MPICH2 for
Windows from Argonne.
Regards,
Bill.
-Original Message-
From: John Vert [mailto:[EMAIL PROTECTED]
Sent: Tuesday, April 03, 2007 3:19 PM
To: Robert G. Brown; Bill Bryce
Cc: beowulf@beowulf.org
Subject: RE: [Beowulf] Win64 Clusters!!!
Regarding the 'borgification' of MPI on Windows, there is an opening
for them to do this. MPI did not define how you start the tasks when
you launch a parallel job; it left that up to the MPI implementation.
Now most MPIs use something reasonable, like say ssh - or starting an mpd
ring to launc
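For reference, the mpd-style launch mentioned above looks roughly like this with MPICH2's old mpd process manager (host counts and the hostfile name are illustrative):

```
mpdboot -n 16 -f mpd.hosts     # start one mpd daemon per host, forming a ring
mpdtrace                       # verify the ring is complete
mpiexec -n 64 ./my_mpi_app     # launch ranks across the ring
mpdallexit                     # tear the ring down
```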
From: Eric Shook [mailto:[EMAIL PROTECTED]
Sent: Tuesday, December 12, 2006 12:28 PM
To: Bill Bryce
Cc: Michael Will; Buccaneer for Hire.; beowulf@beowulf.org
Subject: Re: [Beowulf] SATA II - PXE+NFS - diskless compute nodes
Hi Bill,
I will try to email them and let everyone know what they h
Hi Eric,
You may want to send the Perceus guys an email and ask them how hard it
is to replace cAos Linux with RHEL or CentOS. I don't believe it should
be that hard for them to do; we modified Warewulf to install on top of
a stock Rocks cluster, effectively turning a Rocks cluster into a
Warewu
us.
Bill.
-Original Message-
From: M J Harvey [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 04, 2006 12:23 PM
To: Bill Bryce
Cc: beowulf@beowulf.org
Subject: Re: [Beowulf] Intel MPI 2.0 mpdboot and large clusters, slow
to start up, sometimes not at all
Hello,
> We are going thro
Hi Mark,
We are going through a similar experience at one of our customer sites.
They are trying to run Intel MPI on more than 1,000 nodes. Are you
experiencing problems starting the MPD ring? We noticed it takes a
really long time, especially when the node count is large. It also just
doesn'