consequences before
you get wrong results and publish them.
-- Reuti
as a filesystem? I only
found some older work about accessing an SQL database in this way.
-- Reuti
ASCII data - would it work to put
it in a database instead of all these single files? How do you access the
files: by some kind of index, name, directory...?
-- Reuti
> I want to ask this general question: how does your shop deal with the general
> problem of
> small files in filesystems
and can confirm the same.
Is the speed limited by the memory chips or the mainboard, or the population on
the mainboard (i.e. often the speed drops when all DIMM banks are used).
-- Reuti
> For instance, the Nehalem-EP system typically gets around 12GB/s for
> triad and the Westmere-EX s
se these devices, just two thoughts:
For their GCM they list different limits:
http://www.redbooks.ibm.com/abstracts/tips0772.html
Do the machines you buy have onboard remote KVM which you could use instead of
the LCM/GCM?
-- Reuti
> http://www.redbooks.ibm.com/abstracts/tips0788.html
they can reference
> with an environment variable (say, SCRATCH, or SGE_TMP or something
One you get for free in SGE; it can be accessed via $TMPDIR inside the
job script. Creating one in a prolog is only necessary in case you need a second
one (maybe a global one in addition to the one on the
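A minimal job script sketch using this per-job scratch directory (input file
and program name are made up):
#!/bin/sh
#$ -S /bin/sh
# $TMPDIR is created by sge_execd for each job and removed when the job ends
cd $TMPDIR
cp $SGE_O_WORKDIR/input.dat .                  # hypothetical input file
$SGE_O_WORKDIR/my_app input.dat > result.out   # hypothetical application
cp result.out $SGE_O_WORKDIR/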
engine's per-node
>> daemon to launch the binaries not ssh).
>
> why should a scheduler have daemons cluttering up compute nodes?
I think he refers to a daemon like sge_execd in SGE to receive jobs from the
qmaster, and there will be similar ones for other queuing systems running
s together?
To combine them for what: HA, HPC or storage?
-- Reuti
>
> Regards
> Jonathan
memory usage could be noted. There may be nodes with 15
idling cores out of 16, but the one remaining core is already using all memory
inside the node, leaving no room for other jobs.
-- Reuti
Maybe I get it wrong, but I was checking these machines recently:
IBM's x3550 M4 goes up to 768 GB with 2 CPUs
http://public.dhe.ibm.com/common/ssi/ecm/en/xsd03131usen/XSD03131USEN.PDF
IBM's x3950 X5 goes up to 3 TB with their MAX-5 extension using 4 CPUs, so I
assume 1.5 TB with 2 CPUs co
worth trying (and regarding this setup
I was asking whether anyone is using it and has experience - it's like running
OpenMP across nodes).
-- Reuti
> Since rendering is mostly an embarrassingly parallel workload, it is
> way easier to install & run something like Rocks cluster
non-registered memory would be a more
understandable difference in execution time.
Several CPUs also slow down memory access if many DIMMs are installed, so it
seems to be better to use larger and hence fewer memory modules - which might
be more expensive though.
-- Reuti
test suite to verify the installation.
...and any update/patch. Once you upgrade the kernel and/or libraries the test
suite has to be run again.
-- Reuti
> Believe me, that is not an exception, I
> know a number of chemistry codes which are used in practice and there is no
> test suite, or t
ECC discussion.
>
> ECC is simply a requirement IMHO, not a 'luxury thing' as some
> hardware engineers see it.
+1
-- Reuti
single machine out of many - does
anyone have experience with it and use it in a cluster?
http://www.scalemp.com/ http://www.kerrighed.org/wiki/index.php/Main_Page
-- Reuti
f messages. MPI has some kind of functionality
> inside to address fault tolerance anyway.
If you are interested: there was a lot of discussion about FT in MPI3. There is
a mailing list:
http://lists.mpi-forum.org/mailman/listinfo.cgi/mpi3-ft
-- Reuti
commercially supported version from Univa.
> If Gridengine were closed source, it would have withered and died.
But the Univa version is now closed source (besides some "GE kernel" they put
online, but it's not the full product AFAIK).
To stay with open source there are the forks
red hardware.
http://www.necam.com/servers/ft/
-- Reuti
> Or, are there any other open-source s/w for this?
>
> Thanks.
> John Peter
>
>
installation by PXE, control by
ipmitool, queue control, ...
-- Reuti
> Regards
>
> Jonathan Aquilina
>
>
>
> On 13 Jun 2012, at 17:54, Joe Landman wrote:
>
>> Bright computing product. Uses their own cluster tools.
>> --
>> Sent from an android device. Pl
your job) to a halt if that's what you want"], sounds pedagogically
>> attractive ... ;-)
They use it in one cluster with Slurm that I have access to. But it looks like
you are never thrown out again once you are in.
-- Reuti
> This doesn't apply to my case, since access to the systems
the command. It depends on whether
they need the output, and/or whether any output file is created by the
application on its own anyway.
-- Reuti
> GREG
>
>>
>> Prentice
host and run screen thereon to reconnect later. But they should already
use qlogin to go to the exechost.
-- Reuti
> Cheers,
> --
> Kilian
integration of ssh).
For users just checking their jobs on a node I have a dedicated queue (where
they can always log in, but with h_cpu limited to 60 seconds, i.e. they can't
abuse it).
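A sketch of the relevant attribute in such a queue (queue name is made up):
qconf -sq login.q | grep h_cpu
h_cpu                 0:1:0
i.e. a hard CPU time limit of one minute for processes started in this queue.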
-- Reuti
afterwards. Do you have one queue per node?
For changing hostnames it's best to remove the nodes first from SGE and then
add them again with the new name.
-- Reuti
> to oldsaf_something. Currently the mendel_* queues are all jammed,
> because by IP number mendel
> is the
sh" is meant to be used in an empty SGE installation. It might
work in some cases even by overriding the previous setting, but if you have
e.g. mutual settings in two queues for subordination of the other queue, you
first have to remove this subordination in one queue before you can remove
Hi all,
on behalf of Jörg I forward this to the list, as his account seems to be
blocked from posting to this list.
-- Reuti
> #
> Dear all,
>
> as I cannot post directly to the list although I am subscribing to it, I have
> asked a friend of mine to post t
On 14.06.2011 at 21:53, Michael Di Domenico wrote:
> apparently
>
> awk '/pattern1/,/pattern2/' does what i need
Yep, or similar with sed and the same address type.
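For example (patterns and file name are placeholders):
awk '/pattern1/,/pattern2/' file.txt
sed -n '/pattern1/,/pattern2/p' file.txt
Both print all lines from a match of pattern1 through the next match of
pattern2.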
-- Reuti
> thanks
>
> On Tue, Jun 14, 2011 at 3:48 PM, Michael Di Domenico
> wrote:
>>
On 21.04.2011 at 15:11, Vincent Diepeveen wrote:
> Regrettably the link is not available anymore. Can you expand on it?
For me it's still working. Did you select both lines?
-- Reuti
> As they count the cloud computing in units of 1Ghz per cpunode hour,
> 1 billion computing
; fail the build.
>
> Does it still use aimk
Still aimk.
-- Reuti
> or has it finally gone over to autoconf, automake?
> As I recall aimk was really touchy the last time I built this (4
> years ago), with lots of futzing around to convince it to use library
> files it shou
platform-specific tarball. Does it imply
supplying *.rpm in the future? It was always nice to just untar SGE and run it
as a normal user w/o any root privilege (yes, rpm2cpio could do it). And it was
one tarball for all Linux variants. I would vote for staying with this.
-- Reuti
> We build
> want to create a new queue.
> kindly help
I can't say for Maui, but in GridEngine you can define an RQS (resource quota
set) for users or groups to limit the wallclock time. I would expect that this
is possible in Maui too.
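A sketch of such an RQS (name and limit are made up; depending on the SGE
version, h_rt may have to be defined as a consumable for this to be enforced):
{
   name         max_wallclock
   description  "limit wallclock time per user"
   enabled      TRUE
   limit        users {*} to h_rt=24:00:00
}
It can be added with qconf -arqs.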
-- Reuti
ble.)
I thought the hardware business was the key point in buying Sun - to sell a
database including the serving hardware. What's left otherwise? But you are
right: they already focus on Intel machines only.
-- Reuti
> SGE doesn't fit their picture - a Sun OEM has stepped up to carr
would work, but I wouldn't be surprised if it's slow between
two sites. Do all machines have a public TCP/IP address, and are they not in a
private subnet?
Do all machines have the same version of MPICH2?
-- Reuti
> When I do mpdboot -n x -f mpd.hosts. I get a message like
>
> mpdboot_Fe
independent from any other task of the array job.
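For illustration, a minimal SGE array job (the task range is made up; testR.r
is the script from the quoted mail):
#!/bin/sh
#$ -t 1-100
Rscript testR.r $SGE_TASK_ID
Each task sees its own value in $SGE_TASK_ID and is scheduled independently of
the others.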
-- Reuti
> and in the testR.r file, I was able to read the Task ID/node number and use
> it to parallelize my code. Which variable in the dsh command will let me read
> the node number?
>
> thanks!
>
On 01.09.2010 at 12:15, Marian Marinov wrote:
> On Wednesday 01 September 2010 11:47:29 Reuti wrote:
>> On 01.09.2010 at 09:34, Christopher Samuel wrote:
>>> On 01/09/10 01:58, Reuti
On 01.09.2010 at 09:34, Christopher Samuel wrote:
> On 01/09/10 01:58, Reuti wrote:
>
>> With recent kernels also (kernel) processes in D state
>> count as running.
>
> I wouldn't say recent, tha
e?
With recent kernels, (kernel) processes in D state also count as running. Hence
the load appears higher than the number of running processes would imply when
only these are added up.
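A quick way to spot them (GNU ps assumed):
ps -eo state,pid,cmd | awk '$1 == "D"'
which lists the processes currently in uninterruptible sleep.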
-- Reuti
> On the other hand, if there truly were something wrong with a node[*]
> and I was to use a high load avea
the cluster, where one job after the other crashes due
to missing scratch space.
-- Reuti
> Any comments?
>
> --
> Rahul
serial jobs in a queue which gets
suspended when the main queue actually gets used; provided these "background"
jobs are happy with the non-reserved resources)
-- Reuti
> Any comments or ideas are welcome.
>
> Thanks,
> Stuart Barkley
> --
> I've never been lost; I
ngs or clean up
> after themselves.
Yep. With GridEngine the $TMPDIR will be removed automatically, at least when
the user honors the variable. I disable ssh and rsh in my clusters except for
admin staff. Normal users can use an interactive job in SGE, which is limited
to a cpu time of 60 sec.,
of the node. But there might be chassis where you can access the drive from
the front w/o hot-swap capability but with a big label: don't remove during
operation.
-- Reuti
>
> --
> Rahul
Split your files into e.g. 10 pieces each and generate 5 par/par2 files for
each of them. Then you need any 10 out of these 15 in total to be
intact to recover the original file.
http://en.wikipedia.org/wiki/Parchive
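A sketch with split and par2cmdline (file name and counts are made up; check
the option letters of your par2 version):
split -n 10 bigfile bigfile.part.
par2 create -b10 -c5 bigfile.par2 bigfile.part.*
par2 repair bigfile.par2 bigfile.part.*
With 10 source blocks and 5 recovery blocks, any 5 missing pieces can be
reconstructed.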
-- Reuti
Thanks,
David Mathog
mat...@caltech.edu
Manager
Hi,
On 17.02.2010 at 20:23, Tsz Kuen Ching wrote:
Thanks for the reply, I have asked around and found out that there
is no firewall on the machine which blocks certain ports.
Does anyone else have an idea or answer?
On Sun, Feb 14, 2010 at 6:18 PM, Reuti wrote:
On 11.02.2010 at 19:43
certain ports?
-- Reuti
Any Ideas or suggestions?
This is what happens:
u...@laptop> pvm
pvm> add slave-slave
add slave-slave
Terminated
u...@laptop> ...
The logs are as follows:
Laptop log
---
[t8004] 02/11 10:23:32 laptop (127.0.1.1:55884) LINUX 3.4.5
[t8004] 02/11 10:23:32 re
e
disks with other controllers due to this.
-- Reuti
* http://www.heise.de/foren/S-Re-Erfahrungen-Tandberg-RDX-QuikStor/forum-7273/msg-16513128/read/
Cheers,
--
Kilian
An alias is
also working for me (unset module and define it like below):
b) alias module='/cm/local/apps/environment-modules/3.2.6/Modules/$MODULE_VERSION/bin/modulecmd bash $*'
or c) you could define a wrapper with a script and put it in /usr/local/bin
or alike.
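A shell-function variant of c) (path copied from b); modulecmd only prints
shell code, so it has to be eval'ed by the calling shell):
module() {
    eval "$(/cm/local/apps/environment-modules/3.2.6/Modules/$MODULE_VERSION/bin/modulecmd bash "$@")"
}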
-- Reuti
I won
Hi,
On 18.01.2010 at 10:13, Sangamesh B wrote:
Hello all,
Thanks for your suggestions.
But we lost the access to the cluster because of the delay.
but the access to the service processor should still be there, and I
think Skylar referred to the ILOM interface.
-- Reuti
get the time back.
-- Reuti
using Global Arrays internally, and Open MPI is only
used for communication? Which version of GA is included with your
current NWChem?
-- Reuti
Other parts of the program behaving more as you expect it (again
parallel
between nodes, taken from one node):
14902 sassy 25 0 2161m 325m 1
and are invisible from the
outside).
==
If the administrative network is "Lights Out" management, I would
look for a switch with less performance lying around. If your
servers have it built-in, I would use it. If you have enough ports on
the switches, you can also connect it
some kind of database
like SGE?
Is anyone putting the qmaster(s) in separate virtual machine(s) on
the file server for failover? I got this idea recently.
-- Reuti
Two service nodes which act as login/batch submission nodes.
PBSpro configured to fail over between them (ie one is the
f others.
Operating Sun VirtualBox w/o the graphical interface is possible,
and you can also direct the virtual console to any remote machine
using "rdesktop" as client on any platform you like.
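E.g. (the VM name is made up; option names vary between VirtualBox versions):
VBoxHeadless --startvm "node-vm" &
rdesktop vmhost:3389
Headless mode exports the VM's console via VRDP, by default on port 3389 of
the host running the VM.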
-- Reuti
David Ramirez
Grad Student CIS
Prairie View A&M University, Texa
ssh-keygen -y, which
is completely offline, as this will also need the passphrase to be
entered.
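For example (default key location assumed):
ssh-keygen -y -f ~/.ssh/id_rsa
prints the public key belonging to the private key and asks for the
passphrase if the key is encrypted.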
-- Reuti
an
RQS (resource quota set) like:
limit queues !login.q hosts {...@dualquad} to slots=8
-- Reuti
PS: I saw on Debian that their "su" will not only set the user, but
also remove any imposed cpu time limit by making an su to oneself.
For the SGE queue to impose the h_cp
their hash
information about the original files. Maybe this could be implemented
as a FUSE filesystem for easy handling (which would automatically split
the files, create the hashes and any number of par files you like).
-- Reuti
I would extend this to all data with
the hope that any problems
On 20.08.2009 at 19:33, Mikhail Kuzminsky wrote:
In message from Reuti (Thu, 20 Aug
2009 19:02:49 +0200):
On 20.08.2009 at 16:29, Mikhail Kuzminsky wrote:
In message from Reuti (Wed, 19 Aug
2009 21:07:19 +0200):
Maybe the disk id is different from the one recorded in /etc/fstab
On 20.08.2009 at 16:29, Mikhail Kuzminsky wrote:
In message from Reuti (Wed, 19 Aug
2009 21:07:19 +0200):
Maybe the disk id is different from the one recorded in /etc/fstab.
What about using plain /dev/sda1 or alike, or mounting by volume
label?
At the moment of problem /etc/fstab, as
"append resume=..." either in lilo.conf or
grub's menu.lst
...
Waiting for device /dev/disk/by-id/scsi-SATA-WDC_WD-part1 ... /* echo from udev.sh */
Maybe the disk id is different from the one recorded in /etc/fstab.
What about using plain /dev/sda1 or alike, or mounting
]
I meant something like:
export PATH=~/pvm3/lib:$PATH
===
did you add the PATH specification in .bash_profile and/or .profile?
This will only work for an interactive login. AFAIR you must add it
to .bashrc which will be sourced for a non-interactive login, i.e.
during the pvm startup.
-- Reuti
Could you help me?
Thank you in advance
the Intel compiler, when it was -pc64
for the Portland one. And this already means that for the same compiler
the flags change over time.
And even if the code compiles with another compiler: you have to
validate the results, if the code was specified to be correct with
a certain compiler other
nodes to include per line the short hostname, the FQDN, and
the TCP/IP address besides the other hosts' ssh keys (not the user's
ones) as this would avoid any password or adding of the machines to
your personal ~/.ssh/known_hosts file. This won't work with your
workstation of course as it
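One line of such a system-wide /etc/ssh/ssh_known_hosts could look like
(host name, FQDN and address are placeholders):
node01,node01.example.com,192.168.1.101 ssh-rsa AAAAB3Nz...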
I came across this:
http://jobscheduler.sourceforge.net/
I don't know how it will operate in a cluster though, but maybe it's
worth checking.
-- Reuti
Thanks,
Sangamesh
cluster - and finally gave up.
Its successor Open MPI calls qrsh directly when it discovers that
it's running under SGE. It just checks some environment variables.
On the plus side, during the time that SGE was used, I have never
seen a process left behind from a job and
broken with this switch, as
the orted will daemonize. It will only be fixed in 1.3.2 AFAIK.
-- Reuti
A couple of years ago I did setup SGE with MPICH, and had to tinker
with
SGE's startup scripts to get everything to work correctly. Not that
difficult.
I could be wrong but I think at tha
allocation rule of 2 or 4 respectively.
OTOH: Once I looked into Torque and found that with "nodes=2:ppn=2"
I got just one node with 4 slots in total of course. I don't know
whether it can still happen to get such a distribution.
-- Reuti
read the job requirements for every job again and again
and store it.
-- Reuti
(PS: not to mention that a for-loop/array-job is easier to handle
for the user)
if you're saying that the issue is not per-job overhead of
submission, but rather that jobs are too short, well, I think
t
with the side effect of having to use "qstat" (for Torque)
and "showq" (for Maui) to investigate the status of the jobs.
-- Reuti
We also don't make heavy use of the globus style WAN-scale capital
"G" grid computing as much of our workflows and pipelines are
many compilations to carry out.
Having a working backup would make this an issue of a few minutes.
You don't have any?
-- Reuti
Incidentally, it is my impression that such software as memtest86+ v.
2.11 and lshw find it difficult to test such multisocket mainboards as
the Supermicro H8QC8
The warranty is longer for enterprise
drives, up to 5 years. But you have to contact the vendor, not the
dealer, to get a replacement free of charge.
-- Reuti
Different internal filters, bearing seal or lubrication specifications
may also apply. I doubt that firmware is different.
--
Regards,
.3
is used on other nodes. This is at my university so my only
concern is rights to install any dependencies.
You need root access for both (to install it for all users). If you
are alone and want to run serial jobs only, then SGE can even be
installed under a user account.
-- Reuti
if it sticks there it will show
up on the console.
May I beg you to enter an issue about this at
http://gridengine.sunsource.net/ ?
-- Reuti
Intel's MKL.
-- Reuti
Bad news: Are you really inverting the matrix? You should probably be
doing something like LU decomposition.
-- greg
the node.
-- Reuti
On Tue, Oct 21, 2008 at 2:50 PM, Luis Alejandro Del Castillo Riley
<[EMAIL PROTECTED]> wrote:
hi
yes, I have 10 nodes, each one with an Intel Xeon quad core, so basically
there are 4 processors per node
On Tue, Oct 21, 2008 at 7:53 AM, Reuti <[EMAIL PROTECTED]>
?
-- Reuti
already using any compression like RLE or LZW
inside? Do you want to, or must you, stay with TIFF?
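For example, with libtiff's tools (file names are placeholders):
tiffinfo image.tif | grep Compression    # shows the compression scheme in use
tiffcp -c lzw image.tif image-lzw.tif    # rewrites the file with LZW compression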
-- Reuti
as much as I can, as
fast as I can before I send it crossing network ... So, I am
wondering whether
anyone is familiar with any hardware based accelerator, which can
dramatically
improve the
Gaussian03 or Molcas would run much slower if all scratch
files were written to NFS instead of a local disk. We even use
two or three striped disks in the nodes.
As the disk is in the node because of our applications anyway, I also
put the OS there.
-- Reuti
3. yet more money on co
The Supermicro boxes all have an IPMI card
installed and SoL works quite ok.
I set up syslog-ng on the nodes to log to the headnode. There each
node will have a distinct file, e.g. "/var/log/nodes/node42.messages".
If you are interested, I could post my configuration files for the
headnode
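A sketch of the relevant statements (host name and paths are made up). On
each node:
destination d_head { udp("headnode" port(514)); };
log { source(s_local); destination(d_head); };
and on the headnode:
source s_net { udp(ip(0.0.0.0) port(514)); };
destination d_nodes { file("/var/log/nodes/$HOST.messages"); };
log { source(s_net); destination(d_nodes); };
where s_local stands for the node's usual local log source.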
feature is
supported by implementing a subordinated queue for long running jobs
in the long-queue. To start the short running jobs every day at a
fixed time, you could use a calendar for the short-queue, which will
be enabled for a few hours every day and then drain a
t looking into something like IBM's GPFS and their
SAN switch and connect all nodes to this switch?
-- Reuti
I hope this helps...
I request your feedback...
Thanks,
Jitesh Dundas
Mobile- +91-9860925706
http://jiteshbdundas.blogspot.com
On 8/9/08, Carsten Aulbert <[EMAIL PROTE
directory in such cases, looking for T entries for the symbols in
question.
-- Reuti
ime on half the processors that were available.
how did you check this? With `top`? You have one queue with 8 slots
per machine?
-- Reuti
In addition, a 4 slot job started well after the 12 slot job has
ramped
up results in the same problem (both the 12 slot job and the four slot
job get assig
what they added/
changed in detail?
-- Reuti
We have reliable functional non-sm/non-ib based execution on
multiple machines now. New code drop coming, so we have to wait on
that. Once we have that, we'll be doing more testing.
Joe
--
Joseph Landman, Ph.D
Founder and CEO
Scal
On 12.05.2008 at 18:01, Craig Tierney wrote:
Reuti wrote:
Hiho,
On 12.05.2008 at 15:14, Prentice Bisbal wrote:
It's still an RFE in SGE to get any arbitrary combination of
resources, e.g. you need for one job 1 host with big I/O, 2
with huge memory and 3 "standard" ty
30 minutes
might end up in either of the two queues if a slot is free. The same
holds for memory requests: a job requesting 4 GB might end up in any
of the two memory limited queues, while a 12 GB request can only be
run in the 16 GB queue. You don't have to specify it, as SGE will
select
while in SGE you request resources and SGE will
select an appropriate queue for you.
-- Reuti
ng in long terms and
additional software in your cluster (maybe even parallel apps), I
would suggest changing the names of the machines.
-- Reuti
BTW: Torque has a list on its own at: http://www.clusterresources.com
Thanks,
Lance
--
Lance S. Jacobsen, Ph.D.
President
GoHypersonic Incorporat
none of them will manage the node usage - you have to assemble a node
list for every run by hand. What you might be looking for is a
resource manager like SGE, Torque, LSF, Condor,... and run parallel
jobs under their supervision.
-- Reuti
ANY help or advice would be greatly apprec
Hi,
On 22.02.2008 at 09:23, Sangamesh B wrote:
Dear Reuti & members of beowulf,
I need to execute a parallel job thru grid engine.
MPICH2 is installed with Process Manager:mpd.
Added a parallel environment MPICH2 into SGE:
$ qconf -sp MPICH2
pe_name MPICH2
slots
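For comparison, a complete PE for a tight MPICH2/mpd integration might look
like this sketch (paths and values are assumptions, in the style of the SGE
MPICH2 howtos):
pe_name            MPICH2
slots              64
user_lists         NONE
xuser_lists        NONE
start_proc_args    /usr/sge/mpich2_mpd/startmpich2.sh -catch_rsh $pe_hostfile
stop_proc_args     /usr/sge/mpich2_mpd/stopmpich2.sh
allocation_rule    $round_robin
control_slaves     TRUE
job_is_first_task  FALSE
urgency_slots      min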
laptop based
upon its MAC?
Q2: have you tried this with the PVM 3.4.4 RPMs (I think you
mentioned you were running 3.4.5)?
There are even newer patches:
http://www.csm.ornl.gov/~kohl/PVM/pvm3.4.5+9.tar.Z
-- Reuti
-b
I can literally snap the same box onto a wire, wait for it to g
be started without rsh/ssh
between the machines. You have to copy and paste some things from
here to there and back and can start up all daemons this way by hand
(page 30 in the PVM book). Maybe this works - just to narrow the cause.
-- Reuti
I suspect a race condition, probably caused b
would be similar
to using a secondary interface.
-- Reuti
rgb
--
Robert G. Brown    Phone(cell): 1-919-280-8443
Duke University Physics Dept, Box 90305
Durham, N.C. 27708-0305
Web: http://www.phy.duke.edu/~rgb
Book of Lilith Website: http://www.phy.duke.edu/~rgb/Lilith/L
S scheduler is dumb...
independent from any timing benefits: it will help to prevent users
from using more than the one granted slot. Some parallel libs just
fork or use threads and don't need any qrsh to spawn a parallel
job. Nasty users can just use OpenMP with a thread count greater than
r root at all. Is it working for a
normal user? Then you can already run parallel programs.
-- Reuti
. Torque, and a
"showq" from Maui. At least for SGE I would suggest staying with the
already built-in one.
-- Reuti
For me "xpvm" is a binary, for you it seems to be a script. Can you
try to start your self-compiled binary directly?
-- Reuti
--
Is there anyone who can help me with XPVM or..?
My Best Wishes
for You and Yours!
73! <http://www.kosmos.mk.ua>