[EMAIL PROTECTED] wrote:
Rich,
I know that Platform (LSF), Altair (PBS), and Cluster Resources (MOAB) have
solutions that do this. I started to look at them at my previous job. Not too
bad in general. I also know that Joe Landman at Scalable Informatics has
a web interface (SWICE?) that he
Hi Jeff, Rich, and Beo-world:
[EMAIL PROTECTED] wrote:
> Rich,
>
> I absolutely agree with you! I've been thinking about this for a long time but
> I don't have any answers for you :) I think there are a number of people
> thinking about this same problem. My best suggestion is to use the web to
Thanks for all the replies, first of all.
I don't know the exact Scyld distribution; however, I am running MPICH 1.2.5.
When I run my program (stripped down to a mere MPI_INIT(...) call) and test it
with valgrind, I get something like:
==21799== Use of uninitialised value of size 8
==21799==
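For reference, a program stripped down the way described above would be just an MPI_Init/MPI_Finalize pair (a minimal sketch; mpicc from the MPICH 1.2.5 installation is assumed to be on the path):

  #include <mpi.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);    /* nothing else: isolate the library's own behaviour */
      MPI_Finalize();
      return 0;
  }

Built with mpicc and run under valgrind, "use of uninitialised value" reports from a program this small quite often originate inside the MPI library itself (for instance padding bytes in structures it hands to system calls) rather than in the application code.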
Ideally ISVs won't link with a DLL for Compute Cluster Server. Instead
they'll call a web service running on the job scheduler node. That web
service could be offered by a Windows cluster or a Linux/UNIX cluster.
This simplifies cluster integration work for ISVs.
We hope the way people do this is
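As an illustration of what such a web-service call could look like from the ISV side, here is a rough sketch in C using libcurl; the endpoint URL and the job-description fields are invented for the example, and any real service would define its own interface:

  #include <stdio.h>
  #include <curl/curl.h>

  int main(void)
  {
      CURL *curl;
      CURLcode rc;

      curl_global_init(CURL_GLOBAL_ALL);
      curl = curl_easy_init();
      if (!curl)
          return 1;

      /* Hypothetical submission endpoint on the scheduler/head node. */
      curl_easy_setopt(curl, CURLOPT_URL, "http://headnode.example.com/jobs");
      /* Hypothetical job description: executable and process count. */
      curl_easy_setopt(curl, CURLOPT_POSTFIELDS,
                       "command=/home/user/solver&nprocs=32");

      rc = curl_easy_perform(curl);
      if (rc != CURLE_OK)
          fprintf(stderr, "submit failed: %s\n", curl_easy_strerror(rc));

      curl_easy_cleanup(curl);
      curl_global_cleanup();
      return 0;
  }

The point is only that the ISV's integration work shrinks to an HTTP client, regardless of whether the scheduler behind the service runs on Windows or Linux/UNIX.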
On Thu, Apr 12, 2007 at 08:59:04AM -0700, Rich Altmaier wrote:
>
> This means, for Linux to have a position as the backend
> compute cluster, we must have this one button job launch
> capability. A Windows library must be available to
> the ISV, to provide a job submission API to the batch
> sch
Rich,
I absolutely agree with you! I've been thinking about this for a long time but
I don't have any answers for you :) I think there are a number of people
thinking about this same problem. My best suggestion is to use the web to
launch and monitor jobs. Everyone knows how to use a browser, so j
Matt,
Which version of Scyld is that? I just tried it out on 30cz1 and CW4 and
neither had an issue; however,
that little mpihello.c might not have been representative of what your
application does... Did you engage
Support?
Michael
Matt Funk wrote:
Hi,
i hope this is the right mailing list
Were I to construct a sentence by writing a gopher, which follows the links
of the wiki articles for first, the Theorem which asserts that Proofs are
Programs, and second, the wiki articles regarding divers symbolic
mathematics computation packages such as Cayley, Maple, and MACSYMA, and
then, purs
On Thu, 12 Apr 2007, Rusty Lusk wrote:
Sorry for the belated participation in this subthread of this most excellent
thread. There is another paper on the relationship between MPI and PVM,
written by Bill Gropp and me. You can find it at
http://www.mcs.anl.gov/~gropp/bib/papers/2002/mpiandpvm.pdf
Hi Rich,
Have a look at the following links
http://drmaa.org/wiki/
http://www.ogf.org/gf/group_info/view.php?group=drmaa-wg
http://www.ogf.org/gf/group_info/view.php?group=saga-rg
Cheers
f.
Rich Altmaier wrote:
Here is a proactive suggestion for keeping open source
a
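To give a flavour of what the DRMAA route linked above looks like in practice, here is a rough sketch against the DRMAA 1.0 C binding (error handling trimmed; the executable path and argument are placeholders):

  #include <stdio.h>
  #include "drmaa.h"

  int main(void)
  {
      char err[1024], jobid[256];
      drmaa_job_template_t *jt = NULL;
      const char *args[] = { "input.dat", NULL };    /* placeholder argv */

      if (drmaa_init(NULL, err, sizeof(err)) != DRMAA_ERRNO_SUCCESS) {
          fprintf(stderr, "drmaa_init: %s\n", err);
          return 1;
      }
      drmaa_allocate_job_template(&jt, err, sizeof(err));
      drmaa_set_attribute(jt, DRMAA_REMOTE_COMMAND, "/home/me/solver",
                          err, sizeof(err));         /* placeholder path */
      drmaa_set_vector_attribute(jt, DRMAA_V_ARGV, args, err, sizeof(err));

      if (drmaa_run_job(jobid, sizeof(jobid), jt, err, sizeof(err))
              == DRMAA_ERRNO_SUCCESS)
          printf("submitted job %s\n", jobid);

      drmaa_delete_job_template(jt, err, sizeof(err));
      drmaa_exit(err, sizeof(err));
      return 0;
  }

Since the same binding is implemented by several schedulers (Grid Engine and Condor among them), an ISV can code the submission path once and leave the choice of batch system to the site.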
On Thu, 12 Apr 2007, Peter St. John wrote:
I propose we bifurcate into two threads (both of which may be done!).
1. Thesis: 64 bit good. We are all agreed now, case closed, IMO.
:-)
2. Thesis: no group of human beings will ever directly author source code
(meant to compile together) in exc
Hi Christian,
Sorry for this very delayed answer.
At 03:16 27.03.2007, Christian Bell wrote:
I can't type; 482 was indeed a typo. But still, I wouldn't look at
the absolute numbers "as is" since the single-node base case has
different performance. Since 1x2x1 is our only common base case and
Here is a proactive suggestion for keeping open source
ahead of Microsoft CCS:
1. I think CCS will appeal to small shops with no prior cluster
and no admin capability beyond a part-time Windows person.
2. Such customers are the volume seats for a range of desktop
CAD/CAE tools.
3. Such ISVs
On 4/12/07, Ashley Pittman <[EMAIL PROTECTED]> wrote:
> On Mon, 2007-04-09 at 11:30 -0600, Matt Funk wrote:
> > The reason I want to run on 32 processors, though, is that it takes (on
> > 32 procs) several hours till my program crashes. Also, I would like to
> > be able to keep the conditions under w
Gerry,
Yeah. I think Korn is the "preferred" shell at AT&T too. But better for you
to dumb down your tcsh (say) to ksh (say) for the make, than for the
sysadmin to.
Unfortunately the infinite flexibility of the open environment (my choice of
vi, or emacs!!) leads to uncontrollable variety, which fr
Sorry for the belated participation in this subthread of this most
excellent thread. There is another paper on the relationship between
MPI and PVM, written by Bill Gropp and me. You can find it at
http://www.mcs.anl.gov/~gropp/bib/papers/2002/mpiandpvm.pdf
We wrote it because we felt the
Peter,
To some extent I can understand the need to specialize shells,
especially for optimization of batch processing directives. However,
someone reasonably competent with a particular shell should be able to
set up environment variables and such in a manner likely to make a build
work.
A
Gerry,
I just wanted to note that if it's difficult to recompile using a different
shell, perhaps because of elaborate build scripts in tcsh or something, then
your admin is right: he can't help you debug the make until you switch
shells. With the burgeoning complexity of the business specializati
I propose we bifurcate into two threads (both of which may be done!).
1. Thesis: 64 bit good. We are all agreed now, case closed, IMO.
2. Thesis: no group of human beings will ever directly author source code
(meant to compile together) in excess of 4GB.
I think we agree with RGB that 2 is irre
Indeed it is rather amusing to think of an example with very large
text size. At first I thought it would be very hard. But let me see.
Suppose you have a rather complicated function of 50 variables
which takes about 500kB. Then full second derivatives (say,
using Maple or 2500 people) will take about 500k
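A rough check of that arithmetic, assuming each second derivative comes out about as large as the original expression: 50 variables give 50 x 50 = 2500 second partial derivatives (1275 distinct ones if symmetry is exploited), so on the order of 2500 x 500 kB, i.e. somewhere over a gigabyte of generated source.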
Amrik Singh wrote:
Recently we started noticing very high (70-90%) wait states on the file
servers when compute nodes. We have tried to optimize the NFS through
increasing the number of daemons and the rsize and wsize but to no avail.
PS: All the nodes are running SuSE 10.0 and servers are r
I'm trying to get WRF-NMM running on a new IBM p575 and having some
issues. Anyone on here gone down that path and willing to offer a
little advice?
Please reply off-list. Note: AIX spoken on this system. Leads to all
sorts of interesting things when the sysadmin states he can't tell me
wh
I don't know, but one thing that caught my eye is the error at line 47, whereas
your makefile seems to have only 45 (nonblank) lines. Unless I miscounted (which
is easy to do), the error could have been raised when dropping off the EOF without
meeting some grammatical expectation. That made me wonder what "RANLIB
On Wednesday 11 April 2007 09:02:03 pm [EMAIL PROTECTED] wrote:
> Thanks for your replies everyone. I feel like I am understanding things
> a little better. One thing that I realized but not sure if this makes a
> difference is that I have installed an openmpi package and an mpich
> package so co
On Mon, 2007-04-09 at 11:30 -0600, Matt Funk wrote:
> The reason I want to run on 32 processors, though, is that it takes (on
> 32 procs) several hours till my program crashes. Also, I would like to
> be able to keep the conditions under which it crashes intact as much
> as possible (i.e. run on 32 p
On Thu, 12 Apr 2007, Mark Hahn wrote:
2.) If you install the 32-bit version of an OS, say Linux, instead of
the 64-bit version and then run 32-bit apps, do you get the speedup?
No. In any case, I doubt there's any reason to install 32b Linux,
since a 64b kernel _can_ support 32b processes (wh
On Wed, 11 Apr 2007, Richard Walsh wrote:
Hear, hear! For self-adapting software you *can't* distinguish
instructions from data. That may sound over-specialized but I invite
you to consider DNA and what it does: instruct enzymes to modify DNA.
An awful lot comes out of that process. So I don't thi
1.) Why is a 64-bit CPU faster? I had assumed the main benefit was the
memory that could be addressed, obviously a bad assumption.
Being able to address more memory is indeed critical for some codes,
though certainly not all; in fact, the larger pointers hurt some codes.
64b mode also enables a lot mo
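The pointer-size point is easy to see directly. The trivial program sketched below, built on an x86-64 Linux box, reports 4-byte pointers and longs with gcc -m32 and 8-byte ones with gcc -m64 (the LP64 model); that doubling of every pointer is the extra cache and memory footprint that can hurt pointer-heavy codes:

  #include <stdio.h>

  int main(void)
  {
      /* gcc -m32: prints 4 and 4; gcc -m64: prints 8 and 8 (on Linux/x86-64) */
      printf("sizeof(void *) = %zu, sizeof(long) = %zu\n",
             sizeof(void *), sizeof(long));
      return 0;
  }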
On Sun, Apr 08, 2007 at 07:30:41PM +0200, Toon Moene wrote:
> Jon Forrest wrote:
>
> >One thing I've noticed about 64-bit computing in general
> >is that it's being oversold. The **only** reason
> >for running in 64-bit mode is if you need the additional
> >address space.
For AMD64 (including EM6