i is an example). So, fuzziness all
around. Life.
--
Nicholas M. Glykos, Department of Molecular Biology
and Genetics, Democritus University of Thrace, University Campus,
Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551030620,
Ext.77620, Tel (lab) +3
We went from systemimager, which still worked fine on CentOS 6, to a
kickstart-automated install on CentOS 7 after checking several options.
The problem I encountered with systemimager is that the 2.6.x kernel
used in the BOEL and supplied kernels does not load 3.x kernel
modules, and includi
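For reference, a minimal CentOS 7 kickstart sketch (the mirror URL, password,
and partitioning below are placeholders, not our actual configuration):

# ks.cfg sketch for an unattended CentOS 7 node install
install
url --url=http://mirror.example.org/centos/7/os/x86_64/
lang en_US.UTF-8
keyboard us
rootpw --plaintext changeme
timezone Europe/Amsterdam
zerombr
clearpart --all --initlabel
autopart
reboot

%packages
@core
%end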
o running my Molecular Quantum Dynamics I am.
m.
--
mark somers
tel: +31715274437
mail: m.som...@chem.leidenuniv.nl
web: http://theorchem.leidenuniv.nl/people/somers
http://www.cs.rpi.edu/~szymansk/OOF90/bugs.html
--
mark somers
tel: +31715274437
mail: m.som...@chem.leidenuniv.nl
web: http://theorchem.leidenuniv.nl/people/somers
list for building a cluster
but it is rough and partly in Dutch. That, together with some example
config files, allows me to set up a cluster from scratch within three days.
Please mail me if you are interested.
m.
--
mark somers
tel: +31715274437
mail: m.som...@chem.leidenuniv.nl
web: http://theorchem.leidenuniv.nl/people/somers
n your code on 500 processors.
Sign up for a free trial account. www.sabalcore.com 877-492-8027 ext. 11
--
>>>>>>>>>>>>>>>>>>>>>> George M. Sigut <<<<<<<<<<<<<<<<<<<<
Hi! I have a problem with OpenSSH. To let the master communicate with a
node, I create a key pair on the master, put the public key into an
authorized_keys file, and send it from the master to the node:
#scp -r /root/.ssh clisterX:/root/.ssh
Later I try to connect to the node, but I am still prompted for
credentials :(
when I cre
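If it helps, the usual recipe is to install only the public key into the
node's authorized_keys and to keep the permissions strict; a sketch, reusing
the hostname from the message above:

# on the master: generate a key pair with an empty passphrase, if needed
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
# append only the public key to the node's authorized_keys
cat /root/.ssh/id_rsa.pub | ssh clisterX 'mkdir -p /root/.ssh && cat >> /root/.ssh/authorized_keys'
# sshd ignores keys when the permissions are too loose
ssh clisterX 'chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys'

Note that scp -r of the whole .ssh directory also copies the master's
private keys onto the node, which you probably don't want.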
Hi folks,
I have access to a bunch of machines (around 20) in our lab, each one with a
particular configuration, usually some combination of Core i5/i7 and
4GB/8GB/16GB RAM (the "heterogeneous" part), connected by a 24-port Cisco
switch with a reasonable backplane. They're end-user machines, but with
> Enough Vincent - as of now you are now moderated.
And so it goes ... ;-)
--
Nicholas M. Glykos, Department of Molecular Biology
and Genetics, Democritus University of Thrace, University Campus,
Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551030620
> cat /dev/zero | sudo tee /dev/sda
Talking about scissors and X-ray generators ...
--
Nicholas M. Glykos, Department of Molecular Biology
and Genetics, Democritus University of Thrace, University Campus,
Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551030620
> I term this article "fun with sudo, or how to drive down I95 at 65mph
> while holding scissors transporting your x-ray device"
:-)
--
Nicholas M. Glykos, Department of Molecular Biology
and Genetics, Democritus University of Thrace, University Campus,
Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551030620
toy cluster, and not the central cluster of their
university.
--
Nicholas M. Glykos, Department of Molecular Biology
and Genetics, Democritus University of Thrace, University Campus,
Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551030620,
Ext.77620
the
admins explain to the other users that their jobs died because they gave
you root access ?
--
Nicholas M. Glykos, Department of Molecular Biology
and Genetics, Democritus University of Thrace, University Campus,
Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551030620
access to the general users of a shared computing
facility.
My twocents,
Nicholas
--
Nicholas M. Glykos, Department of Molecular Biology
and Genetics, Democritus University of Thrace, University Campus,
Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551030620
http://www.sciencemag.org/content/338/6103/26.full?rss=1
"Japan's K computer made headlines in June 2011 as the world's fastest
supercomputer and again last November when it became the first
computer to top 10 petaflops—or 10 quadrillion calculations per
second—solving a benchmark mathematical pro
make a putative
mini-research-cluster solution. Apologies for not offering to do it
myself, I wouldn't know where to start from ...
--
Nicholas M. Glykos, Department of Molecular Biology
and Genetics, Democritus University of Thrace, University Campus,
Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551030620
Hi Joe,
> I've found that filters help.
You are killing my daily digests.
> If you are afflicted with Microsoft ...
What is 'Microsoft' ?
:-)
All the best (and apologies to the list for the email traffic),
Nicholas
--
Nicholas M. Glykos, Department of Molecular Biology
and Genetics, Democritus University of Thrace, University Campus,
Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551030620
re you all know---
significantly reduced the signal-to-noise ratio. Can we get back to
normal, please ?
Thanks,
Nicholas
--
Nicholas M. Glykos, Department of Molecular Biology
and Genetics, Democritus University of Thrace, University Campus,
Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551030620
raints is a balancing act between
crippling creativity (and making power users mad) and avoiding equipment
misuse, but clearly, there are limits to the freedom of use (for example,
you wouldn't add all cluster users to your sudo list).
My twocents,
Nicholas
--
Dr Nicholas M. Glykos, Department of Molecular Biology
and Genetics, Democritus University of Thrace, University Campus,
Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551030620
s (allocated through slurm). The
principal idea ["you are welcome to bring your allocated node (and,
thus, your job) to a halt if that's what you want"] sounds pedagogically
attractive ... ;-)
Nicholas
--
Dr Nicholas M. Glykos, Department of Molecular Biology
and Genetics, Democritus University of Thrace, University Campus,
Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551030620
being productive, no
matter what the assigned topic is. Unfortunately (and as usually happens
with all aphorisms), the inverse statement is also true ;-)
My twocents,
Nicholas
--
Dr Nicholas M. Glykos, Department of Molecular Biology
and Genetics, Democritus University of Thrace, University Campus,
Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551030620
> http://arxiv.org/abs/cond-mat/0506786, sorry, nobody ever gets to talk
> about their thesis...).
:-)) (sorry, sorry, I couldn't resist the temptation).
--
Dr Nicholas M. Glykos, Department of Molecular Biology
and Genetics, Democritus University of Thrace, University Campus,
Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551030620
* Jeremy Baker (jello...@gmail.com) [090922 21:17]:
>
>Can someone help me to better understand how these patents interact with
>the open-source bazaar method of programming, Linux, the law, GIS systems
>with metadata that is essentially 3-D access for a user's avatar, etc.? I
>am h
his is through a pam module for slurm that only allows ssh access
to those nodes that a user has active jobs on).
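For reference, a sketch of what that typically looks like on the compute
nodes (the module name and policy line are assumptions; details vary by
SLURM version and distribution):

# /etc/pam.d/sshd on each compute node: allow ssh only to users who
# have an active SLURM job allocated on that node
account    required     pam_slurm.so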
Nicholas
--
Dr Nicholas M. Glykos, Department of Molecular Biology
and Genetics, Democritus University of Thrace, University Campus,
Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551030620
the most highly optimised, readable and
absolutely professional pieces of code.
My twopence,
Nicholas
--
Dr Nicholas M. Glykos, Department of Molecular Biology
and Genetics, Democritus University of Thrace, University Campus,
Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551030620
David Mathog wrote:
Have any of you CUDA folks produced diagnostic programs you run during
"burn-in" of new GPU-based systems, in order to weed out problem units
before putting them into service?
A while ago I wrote a CUDA implementation of a subset of the Memtest86+
algorithms, to test the r
y tell you more
about the subject than you'd ever wished to know ;-)
Nicholas
--
Dr Nicholas M. Glykos, Department of Molecular
Biology and Genetics, Democritus University of Thrace,
University Campus, 68100 Alexandroupolis, Greece, Fax +302551030620
Te
On Monday 08 September 2008 21:30:03 Rahul Nabar wrote:
> I was experimenting with channel bonding my twin eth ports to
> get a combined bandwidth of (close to) 2 Gbps. The two relevant modes
> were 4 (802.3ad) and 6 (alb = Adaptive Load Balancing). I was trying to
> compare performance for bot
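For anyone wanting to try this, a sketch of bringing such a bond up by hand
on a 2.6-era kernel (interface names and the address are assumptions; mode 4
additionally needs LACP configured on the switch ports):

# load the bonding driver in 802.3ad (mode 4) with link monitoring
modprobe bonding mode=802.3ad miimon=100
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1
# for mode 6 instead: modprobe bonding mode=balance-alb miimon=100

Keep in mind that with 802.3ad a single TCP stream hashes onto one slave, so
a lone transfer still tops out at 1 Gbps; the aggregate bandwidth only shows
up across multiple flows.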
Hi all
the November issue of the Communications of the ACM has a
nice paper by Basili and Zelkowitz, where they report the following
data from the paper:
Hochstein, L., Carver, J., Shull, F., Asgari, A., Basili, V.,
Hollingsworth, J., and Zelkowitz, M. Parallel programmer productivity
produces...
Best wishes,
Mark Somers.
--
Dr. M. F. Somers
Theoretical Chemistry - Leiden Institute of Chemistry - Leiden University
Einsteinweg 55, P.B. 9502, 2300 RA Leiden, The Netherlands
tel: +31715274437
mail: [EMAIL PROTECTED]
web: http://rulgla.leidenuniv.nl/Researche
I tried to run HPL on a single machine, but it always fails
rtes1:/.../hpl/bin/RTES/ mpirun -np 4 xhpl
p0_30424: p4_error: Path to program is invalid while starting
/.../hpl/bin/RTES/xhpl with rsh on rtes1: -1
p4_error: latest msg from perror: No such file or directory
p0_30235: (45.056821)
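For what it's worth, the MPICH1 p4 device starts ranks over rsh using
exactly the path it was given, so the usual fix is to launch the binary via
a path that exists on every host in the machines file (the paths below are
placeholders for the elided ones above):

# run from the directory holding xhpl, with an explicit relative path
cd /path/to/hpl/bin/RTES
mpirun -np 4 ./xhpl
# or give an absolute path that is valid on all hosts
mpirun -np 4 /path/to/hpl/bin/RTES/xhpl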
I am a newbie in distributed computing... I have a problem running HPL on a
single machine. Hope I can get help here. Thanks.
After
I install HPL on the machine, I try to run it in the bin dir of HPL by
"mpirun -np 1 xhpl". But it reports "cannot find mpirun command".
Actually I have installed
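The "cannot find mpirun command" part simply means the MPI installation's
bin directory is not on the shell's PATH; a sketch (the MPICH prefix is an
assumption):

# put your MPI implementation's bin directory on PATH first
export PATH=/usr/local/mpich/bin:$PATH
which mpirun        # should now resolve
mpirun -np 1 ./xhpl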
Hi,
If you have a batch system that can start the MPDs, you should
consider starting the MPI processes directly with the batch system and
providing a separate service to provide the startup information.
You're exactly right. Intel's MPI is derived from MPICH2 and (as we use
PBSPro) OSC's mpiex
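As a sketch of that approach with PBS and OSC's mpiexec, which talks to the
PBS TM interface directly and therefore needs no MPD ring (resource counts
and the binary name are illustrative):

#!/bin/sh
#PBS -l nodes=8:ppn=2
#PBS -l walltime=1:00:00
cd $PBS_O_WORKDIR
# OSC's mpiexec asks pbs_mom to spawn the ranks; no mpdboot step needed
mpiexec ./my_mpi_app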
Hello,
We are going through a similar experience at one of our customer sites.
They are trying to run Intel MPI on more than 1,000 nodes. Are you
experiencing problems starting the MPD ring? We noticed it takes a
really long time especially when the node count is large. It also just
doesn't w
AM
To: Clements, Brent M (SAIC)
Cc: beowulf@beowulf.org
Subject: Re: [Beowulf] Intel MPI 2.0 mpdboot and large clusters, slow to start
up, sometimes not at all
> Does anyone have any experience running intel mpi over 1000 nodes and do you
> have any tips to speed up task execution? Any tips to
A buddy of mine has a cluster that is over 1000 (2000) nodes.
I've compiled a simple hello-world app to test it out.
I am using Intel MPI 2.0 and running over Ethernet, so I'm trying both the
ssm (since the nodes are SMP machines) and sock devices.
I'm doing the following: mpdboot -n 1500 --
: Clements, Brent M (SAIC); Jakob Oestergaard; beowulf@beowulf.org
Subject: RE: [Beowulf] Stupid MPI programming question
On Thu, 28 Sep 2006, Michael Will wrote:
> That's weird. On my scyld cluster it worked fine once I had created
> /tmp// on all compute nodes before running the job
Thu 9/28/2006 8:09 AM
To: Robert G. Brown
Cc: Clements, Brent M (SAIC); beowulf@beowulf.org
Subject: Re: [Beowulf] Stupid MPI programming question
On Thu, Sep 28, 2006 at 08:57:28AM -0400, Robert G. Brown wrote:
> On Thu, 28 Sep 2006, Jakob Oestergaard wrote:
...
> Ah, that's it. I'd
Thank you for
your cooperation.
From: Joe Landman [mailto:[EMAIL PROTECTED]
Sent: Wed 9/27/2006 11:15 PM
To: Clements, Brent M (SAIC)
Cc: Leone B. Bosi; beowulf@beowulf.org
Subject: Re: [Beowulf] Stupid MPI programming question
Clements, Brent M (SAIC) wrote:
> O
Ok, here is the code I'm working with ... mkdir keeps giving me a -1
failure ... can anyone spot what I'm doing wrong?
#include <stdio.h>    /* all IO stuff lives here */
#include <stdlib.h>   /* exit lives here */
#include <string.h>   /* strcpy lives here */
#include <mpi.h>      /* MPI and MPI-IO live here */
#include <sys/stat.h> /* presumably this one: mkdir(2) needs it */
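Not the original code, but two things worth checking: mkdir(2) is not
recursive, so it returns -1 with errno set to ENOENT when a parent directory
is missing on a compute node (which matches the /tmp fix mentioned elsewhere
in this thread), and every rank calling it for the same path races the
others (EEXIST). Tracing the local ranks shows the exact errno:

# show what mkdir actually returned in the local node's ranks
strace -f -e trace=mkdir mpirun -np 2 ./a.out 2>&1 | grep mkdir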
Hey Guys,
I've been sitting here working for the past 48 hours, and I'm fighting a
stupid bug in some MPI code I'm working on.
How do I broadcast a char string to my slave MPI processes? And how do I
receive that char string and print it out on my slave MPI processes?
This is what I have in m
Intel's NUMA-like solution and also just in general
Thanks
-Original Message-
From: Craig Tierney [mailto:[EMAIL PROTECTED]
Sent: Thursday, September 21, 2006 10:21 AM
To: Clements, Brent M (SAIC)
Cc: beowulf@beowulf.org
Subject: Re: [Beowulf] Your thoughts on use of NUMA-based sy
Out of my own curiosity, would those of you who have dealt with current/next-generation Intel-based NUMA systems give me your opinions on why you would or would not buy or use them as a cluster node.
I'm looking for primarily technica
My vote is for OpenGFS
Good luck on your project.
Hi Dan,
If you use PBS/torque or some other batch system that could use it, the
epilogue script in http://bellatrix.pcl.ox.ac.uk/~ben/pbs/ may help you:
" When running parallel jobs on Linux clusters with MPICH and PBS, "slave"
MPICH processes are often left behind on one or nodes at job abor
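The gist of such an epilogue, as a hedged sketch (not Ben's actual script;
PBS passes the job owner as the epilogue's second argument, and this is only
safe where nodes are dedicated to one job at a time):

#!/bin/sh
# pbs_mom epilogue: kill any processes the job's owner left behind
user=$2
[ -z "$user" ] && exit 0
[ "$user" = "root" ] && exit 0
pkill -9 -u "$user" 2>/dev/null
exit 0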
-----Original Message-----
From: Mark Hahn [mailto:[EMAIL PROTECTED]
Sent: Thursday, September 07, 2006 3:55 PM
To: Clements, Brent M (SAIC)
Cc: beowulf@beowulf.org
Subject: RE: [Beowulf] NCSU and FORTRAN
> I've had grad students and profs in the past get good results using
> Matlab, in
If she's a student, she can download the Intel Fortran compilers (I'm
talking about the command-line compilers, not the visual ones) for free. They
have a number of dev libs that are useful too.
I've had grad students and profs in the past get good results using
Matlab, Intel, and the Intel MKL.
Good Lu
Sounds like you don't need a Beowulf cluster, but what I
call a distributed compute farm or what the marketing buzz calls Utility Grid
Computing. You can install one of many job-execution environments such as Condor,
Platform, SGE, United Devices Grid MP, etc. etc. to manage
your CPU/memory/di
Policies.
BC
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Clements, Brent M (SAIC)
Sent: Thursday, August 03, 2006 9:03 AM
To: Xu, Jerry; Chris Dagdigian; beowulf@beowulf.org
Subject: RE: [Beowulf] scheduler and perl
If I recall from my LSF days
If I recall from my LSF days, you can limit the number of jobs that a
user can run at one time based upon queue policy.
This is also the case with MAUI/Moab and some other policy-based job
schedulers.
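For example, a one-line maui.cfg sketch of such a policy (the limit value is
illustrative):

# maui.cfg: cap the number of simultaneously running jobs per user
USERCFG[DEFAULT]  MAXJOB=4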
BC
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of X
> Hi Jerry:
>> the other example is to use a system call and ssh
>> to each node to run stuff, bypassing the scheduler...
Torque 2.1.2 has just been released. It comes with a pam module that, if I
understood it right, makes it harder (though not impossible) for users to
bypass the batch system. The
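Presumably that is the pam_pbssimpleauth module shipped in the Torque tree;
a sketch of wiring it in (the module name and policy line are assumptions,
so check the Torque docs):

# /etc/pam.d/sshd on each compute node: deny logins to users without
# an active PBS/Torque job on that node
account    required     pam_pbssimpleauth.so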
Hi,
I can confirm I have done what Kevin says in this email. At least it
was enough for me to write a shell script that would do
#!/bin/bash
export FOO=bar
mpirun ... my_parallel_app "$@"
Hope it helps,
-- Diego.
Kevin Ball wrote:
Mathieu,
On Fri, 2006-06-23 at 04:38, mg wrote:
Hello,
I find that most large "supercomputers" are still nothing more than compute
farms that have an execution daemon and policy monitor to manage the compute
farm.
Brent Clements
I second pdsh
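For anyone who hasn't used it, pdsh fans a command out to many nodes in
parallel; typical invocations (hostnames assumed):

# run a command on node01..node16 in parallel
pdsh -w node[01-16] uptime
# or take the target list from a file
pdsh -w ^/etc/cluster/hosts uptime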
Brent Clements
p.
Brent
From: [EMAIL PROTECTED] on behalf of Clements, Brent M (SAIC)
Sent: Tue 2/14/2006 11:43 AM
To: beowulf@beowulf.org
Subject: [Beowulf] Nas parallel benchmarks issue
Hello,
Would it be possible to find out from you guys exactly which benchmark I should
build in order to test a 16-way
Hello,
Would it be possible to find out from you guys exactly which benchmark I should
build in order to test a 16-way SMP system? And secondly, how should I then
run that benchmark?
I've done the following in NPB3.2-OMP/
make bt CLASS=A
and then run (per the README-3.1 instructions)
setenv
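For the OpenMP build this presumably boils down to setting the thread count
before launching the binary, e.g. (csh, as in the README; the binary name
follows the NPB <benchmark>.<class>.x convention):

setenv OMP_NUM_THREADS 16
./bin/bt.A.x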
On Mon, Feb 06, 2006 at 04:07:50PM -0800, Donald Becker wrote:
> Nor does "ramdisk root" give you the magic. A ramdisk root is part of how
> we implement the architecture, especially the part about not requiring local
> storage or network file systems to work. (Philosophy: You mount file
> sys
On Thu, Jan 26, 2006 at 04:09:58PM -0700, Warren Turkal wrote:
> Is it OK to mix Linux distributions when building a cluster? I am wondering
> for migration purposes. For instance, if the current cluster had FC2 and I
> wanted to move to FC3, would it be okay to install new nodes as FC3 and
> gr