Re: [Beowulf] First cluster in 20 years - questions about today

2020-02-07 Thread Nicholas M. Glykos
i is an example). So, fuzziness all around. Life. -- Nicholas M. Glykos, Department of Molecular Biology and Genetics, Democritus University of Thrace, University Campus, Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551030620, Ext.77620, Tel (lab) +3

Re: [Beowulf] Anybody here still use SystemImager?

2019-02-26 Thread m . somers
We went from SystemImager, which still worked fine on CentOS 6, to a kickstart-automated install on CentOS 7 after checking several options. The problem I encountered with SystemImager is that the 2.6.x kernel used in the BOEL and supplied kernels does not load the 3.x kernel modules and includi

Re: [Beowulf] HPC Workflows

2018-12-01 Thread m . somers
o running my Molecular Quantum Dynamics I am. m. -- mark somers tel: +31715274437 mail: m.som...@chem.leidenuniv.nl web: http://theorchem.leidenuniv.nl/people/somers ___ Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing To change

Re: [Beowulf] Fortran is awesome

2018-11-29 Thread m . somers
http://www.cs.rpi.edu/~szymansk/OOF90/bugs.html -- mark somers tel: +31715274437 mail: m.som...@chem.leidenuniv.nl web: http://theorchem.leidenuniv.nl/people/somers ___ Beowulf mailing list, Beowulf@beowulf.org sponsored by Penguin Computing To chan

[Beowulf] CentOS 7.x for cluster nodes

2016-12-30 Thread m . somers
list for building a cluster, but it is rough and partly in Dutch. That, together with some example config files, allows me to set up a cluster from scratch within three days. Please mail me if you are interested. m. -- mark somers tel: +31715274437 mail: m.som...@chem.leidenuniv.nl web: http://the

Re: [Beowulf] Slide on big data

2014-02-19 Thread G . M . Sigut
n your code on 500 processors. Sign up for a free trial account. www.sabalcore.com 877-492-8027 ext. 11 -- George M. Sigut

Re: [Beowulf] i7-4770R 128MB L4 cache CPU in compact 0.79 litre box - DIY cluster?

2014-01-21 Thread carlos m ceballos
Hi! I have a problem with OpenSSH. I created a key on the master to communicate with the node: I created the public key and the authorized_keys file, and sent the authorized_keys file from the master to the node: #scp -r /root/.ssh clisterX:/root/.ssh Later I try to connect to the node, but it still asks me for the key :( When I cre

[Beowulf] Heterogeneous, intermitent beowulf cluster administration

2013-09-26 Thread Ivan M
Hi folks, I have access to a bunch (around 20) of machines in our lab, each one with a particular configuration, usually some combination of Core i5/i7 and 4GB/8GB/16GB RAM (the "heterogeneous" part), connected by a 24-port Cisco switch with a reasonable backplane. They're end-user machines, but with

Re: [Beowulf] Stop the noise please

2013-05-13 Thread Nicholas M Glykos
> Enough Vincent - as of now you are now moderated. And so it goes ... ;-) -- Nicholas M. Glykos, Department of Molecular Biology and Genetics, Democritus University of Thrace, University Campus, Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551030

Re: [Beowulf] Definition of HPC

2013-04-19 Thread Nicholas M Glykos
> cat /dev/zero | sudo tee /dev/sda Talking about scissors and X-ray generators ... -- Nicholas M. Glykos, Department of Molecular Biology and Genetics, Democritus University of Thrace, University Campus, Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (off

Re: [Beowulf] Definition of HPC

2013-04-18 Thread Nicholas M Glykos
> I term this article "fun with sudo, or how to drive down I95 at 65mph > while holding scissors transporting your x-ray device" :- -- Nicholas M. Glykos, Department of Molecular Biology and Genetics, Democritus University of Thrace, University C

Re: [Beowulf] Definition of HPC

2013-04-18 Thread Nicholas M Glykos
toy cluster, and not the central cluster of their university. -- Nicholas M. Glykos, Department of Molecular Biology and Genetics, Democritus University of Thrace, University Campus, Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551030620, Ext

Re: [Beowulf] Definition of HPC

2013-04-18 Thread Nicholas M Glykos
the admins explain to the other users that their jobs died because they gave you root access ? -- Nicholas M. Glykos, Department of Molecular Biology and Genetics, Democritus University of Thrace, University Campus, Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (off

Re: [Beowulf] Definition of HPC

2013-04-18 Thread Nicholas M Glykos
access to the general users of a shared computing facility. My twocents, Nicholas -- Nicholas M. Glykos, Department of Molecular Biology and Genetics, Democritus University of Thrace, University Campus, Dragana, 68100 Alexandroupolis, Greece, Tel/Fax (office) +302551

[Beowulf] K Computer built for speed, not use

2012-10-10 Thread Ivan M
http://www.sciencemag.org/content/338/6103/26.full?rss=1 "Japan's K computer made headlines in June 2011 as the world's fastest supercomputer and again last November when it became the first computer to top 10 petaflops—or 10 quadrillion calculations per second—solving a benchmark mathematical pro

Re: [Beowulf] Servers Too Hot? Intel Recommends a Luxurious Oil Bath

2012-09-05 Thread Nicholas M Glykos
make a putative mini-research-cluster solution. Apologies for not offering to do it myself, I wouldn't know where to start from ... -- Nicholas M. Glykos, Department of Molecular Biology and Genetics, Democritus University of Thrace, University Campus, Dragana, 68100 Al

Re: [Beowulf] Signal to noise.

2012-01-27 Thread Nicholas M Glykos
Hi Joe, > I've found that filters help. You are killing my daily digests. > If you are afflicted with Microsoft ... What is 'Microsoft' ? :-) All the best (and apologies to the list for the email traffic), Nicholas -- Nicholas M. Glykos, Department

[Beowulf] Signal to noise.

2012-01-27 Thread Nicholas M Glykos
re you all know--- significantly reduced the signal-to-noise ratio. Can we get back to normal, please ? Thanks, Nicholas -- Nicholas M. Glykos, Department of Molecular Biology and Genetics, Democritus University of Thrace, University Campus, Dragana, 68100 Alexandroupolis, Greec

Re: [Beowulf] Users abusing screen

2011-10-28 Thread Nicholas M Glykos
raints is a balancing act between crippling creativity (and making power users mad) and avoiding equipment misuse, but clearly, there are limits in the freedom of use (for example, you wouldn't add all cluster users to your sudo list). My twocents, Nicholas -- Dr Ni

Re: [Beowulf] Users abusing screen

2011-10-27 Thread Nicholas M Glykos
s (allocated through slurm). The principal idea ["you are welcome to bring your allocated node (and, thus, your job) to a halt if that's what you want"] sounds pedagogically attractive ... ;-) Nicholas -- Dr Nicholas M. Glykos, Department of Molecular Biology

Re: [Beowulf] GPU

2010-09-01 Thread Nicholas M Glykos
being productive, no matter what the assigned topic is. Unfortunately (and as usually happens with all aphorisms), the inverse statement is also true .-) My twocents, Nicholas -- Dr Nicholas M. Glykos, Department of Molecular Biology and Genetics, Democritus University of Thra

Re: [Beowulf] Peformance penalty when using 128-bit reals on AMD64

2010-06-25 Thread Nicholas M Glykos
> http://arxiv.org/abs/cond-mat/0506786, sorry, nobody ever gets to talk > about their thesis...). :-)) (sorry, sorry, I couldn't resist the temptation). -- Dr Nicholas M. Glykos, Department of Molecular Biology and Genetics, Democritus University of Thrace,

Re: [Beowulf] How do I work around this patent?

2009-09-23 Thread m
* Jeremy Baker (jello...@gmail.com) [090922 21:17]: > >Can someone help me to better understand how these patents interact with >the open source bazaar method of programing, Linux, the law, GIS systems >with meta data that is essentially 3-D access for a user's avatar, etc? I >am h

Re: [Beowulf] Intra-cluster security

2009-09-13 Thread Nicholas M Glykos
his is through a pam module for slurm that only allows ssh access to those nodes that a user has active jobs on). Nicholas -- Dr Nicholas M. Glykos, Department of Molecular Biology and Genetics, Democritus University of Thrace, University Campus, Dragana, 68100 Alexandroupo

Re: [Beowulf] Beowulf SysAdmin Job Description

2009-05-07 Thread Nicholas M Glykos
the most highly optimised, readable and absolutely professional pieces of code. My twopence, Nicholas -- Dr Nicholas M. Glykos, Department of Molecular Biology and Genetics, Democritus University of Thrace, University Campus, Dragana, 68100 Alexandroupolis, Greece, Tel/Fa

Re: [Beowulf] GPU diagnostics?

2009-03-31 Thread M J Harvey
David Mathog wrote: Have any of you CUDA folks produced diagnostic programs you run during "burn in" of new GPU-based systems, in order to weed out problem units before putting them into service? A while ago I wrote a CUDA implementation of a subset of the Memtest86+ algorithms, to test the r

Re: [Beowulf] Problems scaling performance to more than one node, GbE

2009-02-14 Thread Nicholas M Glykos
y tell you more about the subjest that you'd ever wished to know ;-) Nicholas -- Dr Nicholas M. Glykos, Department of Molecular Biology and Genetics, Democritus University of Thrace, University Campus, 68100 Alexandroupolis, Greece, Fax +302551030620 Te

Re: [Beowulf] ethernet bonding performance comparison "802.3ad" vs Adaptive Load Balancing

2008-09-17 Thread Diego M. Vadell
On Monday 08 September 2008 21:30:03 Rahul Nabar wrote: > I was experimenting with using channel bonding my twin eth ports to > get a combined bandwidth of (close to) 2 Gbps. The two relevant modes > were 4 (802.3ad) and 6 (alb=Adaptive Load Balancing). I was trying to > compare performance for bot

[Beowulf] interesting paper on experimental computer science

2007-11-24 Thread m
Hi all, the November issue of the Communications of the ACM has a nice paper by Basili and Zelkowitz where they report the following data, from the paper: Hochstein, L., Carver, J., Shull, F., Asgari, A., Basili, V., Hollingsworth, J., and Zelkowitz, M. Parallel programmer productivity

[Beowulf] Web interfaces to clusters

2007-10-29 Thread Dr. M. F. Somers
produces... Best wishes, Mark Somers. -- Dr. M. F. Somers Theoretical Chemistry - Leiden Institute of Chemistry  - Leiden University Einsteinweg 55, P.B. 9502, 2300 RA Leiden, The Netherlands tel: +31715274437 mail: [EMAIL PROTECTED] web:  http://rulgla.leidenuniv.nl/Researche

[Beowulf] how to run HPL on a single machine

2007-05-16 Thread M C
I tried to run HPL on a single machine, but it always fails: rtes1:/.../hpl/bin/RTES/ mpirun -np 4 xhpl p0_30424: p4_error: Path to program is invalid while starting /.../hpl/bin/RTES/xhpl with rsh on rtes1: -1 p4_error: latest msg from perror: No such file or directory p0_30235: (45.056821)

[Beowulf] a question about running HPL

2007-05-12 Thread M C
I am a newbie in distributed computing... I have a problem of running HPL on a single machine. Hope I can get help here. Thanks. After I install HPL on the machine, I try to run it in the bin dir of HPL by "mpirun -np 1 xhpl". But it reports "cannot find mpirun command". Actually I have installed

Re: [Beowulf] Intel MPI 2.0 mpdboot and large clusters, slow tostart up, sometimes not at all

2006-10-05 Thread M J Harvey
Hi, If you have a batch system that can start the MPDs, you should consider starting the MPI processes directly with the batch system and providing a separate service to provide the startup information. You're exactly right. Intel's MPI is derived from MPICH2 and (as we use PBSPro) OSC's mpiex

Re: [Beowulf] Intel MPI 2.0 mpdboot and large clusters, slow tostart up, sometimes not at all

2006-10-04 Thread M J Harvey
Hello, We are going through a similar experience at one of our customer sites. They are trying to run Intel MPI on more than 1,000 nodes. Are you experiencing problems starting the MPD ring? We noticed it takes a really long time especially when the node count is large. It also just doesn't w

RE: [Beowulf] Intel MPI 2.0 mpdboot and large clusters, slow to start up, sometimes not at all

2006-09-29 Thread Clements, Brent M \(SAIC\)
AM To: Clements, Brent M (SAIC) Cc: beowulf@beowulf.org Subject: Re: [Beowulf] Intel MPI 2.0 mpdboot and large clusters, slow to start up, sometimes not at all > Does anyone have any experience running intel mpi over 1000 nodes and do you > have any tips to speed up task execution? Any tips to

[Beowulf] Intel MPI 2.0 mpdboot and large clusters, slow to start up, sometimes not at all

2006-09-29 Thread Clements, Brent M \(SAIC\)
A buddy of mine has a cluster that is over 1000 (2000) nodes. I've compiled a simple helloworld app to test it out. I am using Intel MPI 2.0 and running over Ethernet, so I'm trying both the ssm (since the nodes are SMP machines) and sock devices. I'm doing the following: mpdboot -n 1500 --

RE: [Beowulf] Stupid MPI programming question

2006-09-28 Thread Clements, Brent M \(SAIC\)
: Clements, Brent M (SAIC); Jakob Oestergaard; beowulf@beowulf.org Subject: RE: [Beowulf] Stupid MPI programming question On Thu, 28 Sep 2006, Michael Will wrote: > That's weird. On my Scyld cluster it worked fine once I had created > /tmp// on all compute nodes before running the job

RE: [Beowulf] Stupid MPI programming question

2006-09-28 Thread Clements, Brent M \(SAIC\)
Thu 9/28/2006 8:09 AM To: Robert G. Brown Cc: Clements, Brent M (SAIC); beowulf@beowulf.org Subject: Re: [Beowulf] Stupid MPI programming question On Thu, Sep 28, 2006 at 08:57:28AM -0400, Robert G. Brown wrote: > On Thu, 28 Sep 2006, Jakob Oestergaard wrote: ... > Ah, that's it. I'd

RE: [Beowulf] Stupid MPI programming question

2006-09-27 Thread Clements, Brent M \(SAIC\)
Thank you for your cooperation. From: Joe Landman [mailto:[EMAIL PROTECTED] Sent: Wed 9/27/2006 11:15 PM To: Clements, Brent M (SAIC) Cc: Leone B. Bosi; beowulf@beowulf.org Subject: Re: [Beowulf] Stupid MPI programming question Clements, Brent M (SAIC) wrote: > O

RE: [Beowulf] Stupid MPI programming question

2006-09-27 Thread Clements, Brent M \(SAIC\)
Ok, here is the code I'm working with... mkdir keeps giving me a -1 failure... can anyone spot what I'm doing wrong? #include <stdio.h> /* all IO stuff lives here */ #include <stdlib.h> /* exit lives here */ #include <string.h> /* strcpy lives here */ #include <mpi.h> /* MPI and MPI-IO live here */ #include
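The snippet is cut off before the mkdir call itself, but a -1 return from mkdir usually just means the directory already exists or its parent is missing, and errno tells you which. A minimal sketch of how one might check this from an MPI program (the path name and the rank-0-only policy are illustrative assumptions, not the poster's original code):

    #include <stdio.h>    /* perror */
    #include <errno.h>    /* errno, EEXIST */
    #include <sys/stat.h> /* mkdir */
    #include <mpi.h>      /* MPI_Init, MPI_Comm_rank, MPI_Barrier */

    int main(int argc, char *argv[])
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Let only rank 0 create the (hypothetical) scratch directory. */
        if (rank == 0) {
            if (mkdir("/tmp/mpi_scratch", 0755) != 0 && errno != EEXIST) {
                perror("mkdir");   /* e.g. ENOENT if the parent is missing */
                MPI_Abort(MPI_COMM_WORLD, 1);
            }
        }
        MPI_Barrier(MPI_COMM_WORLD);  /* others wait until the directory exists */

        MPI_Finalize();
        return 0;
    }

Checking errno (EEXIST, ENOENT, EACCES) is usually enough to see why the call returns -1; if /tmp is node-local rather than shared, each node still needs the directory created.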

[Beowulf] Stupid MPI programming question

2006-09-27 Thread Clements, Brent M \(SAIC\)
Hey Guys, I've been sitting here working for the past 48 hours and I'm fighting a stupid bug in some MPI code I'm working on. How do I broadcast a char string to my slave MPI processes? And how do I receive that char string and print it out on my slave MPI processes? This is what I have in m
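The snippet cuts off before the poster's code, but broadcasting a C string with MPI boils down to every rank calling MPI_Bcast with the same buffer size, type, and root: the root fills the buffer beforehand, the others receive into it. A minimal sketch (the buffer length and message text are made up for illustration):

    #include <stdio.h>
    #include <string.h>
    #include <mpi.h>

    #define MSG_LEN 64   /* fixed length known to all ranks */

    int main(int argc, char *argv[])
    {
        char msg[MSG_LEN];
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0)
            strcpy(msg, "hello from the master");  /* root fills the buffer */

        /* Every rank, root included, makes the identical MPI_Bcast call. */
        MPI_Bcast(msg, MSG_LEN, MPI_CHAR, 0, MPI_COMM_WORLD);

        if (rank != 0)
            printf("rank %d received: %s\n", rank, msg);

        MPI_Finalize();
        return 0;
    }

If the string length is not known in advance, the usual pattern is to broadcast the length first and then the characters in a second MPI_Bcast.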

RE: [Beowulf] Your thoughts on use of NUMA-based systems in clusters?

2006-09-21 Thread Clements, Brent M \(SAIC\)
Intel's NUMA-like solution and also just in general Thanks -Original Message- From: Craig Tierney [mailto:[EMAIL PROTECTED] Sent: Thursday, September 21, 2006 10:21 AM To: Clements, Brent M (SAIC) Cc: beowulf@beowulf.org Subject: Re: [Beowulf] Your thoughts on use of NUMA-based sy

[Beowulf] Your thoughts on use of NUMA-based systems in clusters?

2006-09-21 Thread Clements, Brent M \(SAIC\)
Out of my own curiosity, would those of you who have dealt with current/next-generation Intel-based NUMA systems give me your opinions on why/why not you would buy or use them as a cluster node? I'm looking for primarily technica

RE: [Beowulf] GPFS on Linux (x86)

2006-09-15 Thread Clements, Brent M \(SAIC\)
My vote is for OpenGFS Good luck on your project. ___ Beowulf mailing list, Beowulf@beowulf.org To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf

Re: [Beowulf] Killing may user jobs on many compute nodes

2006-09-13 Thread Diego M. Vadell
Hi Dan, If you use PBS/torque or some other batch system that could use it, the epilogue script in http://bellatrix.pcl.ox.ac.uk/~ben/pbs/ may help you: " When running parallel jobs on Linux clusters with MPICH and PBS, "slave" MPICH processes are often left behind on one or more nodes at job abor

RE: [Beowulf] NCSU and FORTRAN

2006-09-07 Thread Clements, Brent M \(SAIC\)
--Original Message- From: Mark Hahn [mailto:[EMAIL PROTECTED] Sent: Thursday, September 07, 2006 3:55 PM To: Clements, Brent M (SAIC) Cc: beowulf@beowulf.org Subject: RE: [Beowulf] NCSU and FORTRAN > I've had grad students and profs in the past get good results using > Matlab, in

RE: [Beowulf] NCSU and FORTRAN

2006-09-07 Thread Clements, Brent M \(SAIC\)
If she's a student, she can download the Intel Fortran compilers (I'm talking about the command-line compilers, not the visual ones) for free. They have a number of dev libs that are useful too. I've had grad students and profs in the past get good results using Matlab, Intel and the Intel MKL. Good Lu

RE: [Beowulf] Create cluster : questions

2006-09-07 Thread Clements, Brent M \(SAIC\)
Sounds like you don't need a Beowulf cluster, but what I call a distributed compute farm, or what the marketing buzz calls Utility Grid Computing. You can install one of many job execution environments such as Condor, Platform, SGE, United Devices Grid MP, etc. to manage your CPU/memory/di

RE: [Beowulf] scheduler and perl

2006-08-03 Thread Clements, Brent M \(SAIC\)
Policies. BC -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Clements, Brent M (SAIC) Sent: Thursday, August 03, 2006 9:03 AM To: Xu, Jerry; Chris Dagdigian; beowulf@beowulf.org Subject: RE: [Beowulf] scheduler and perl If I recall from my LSF days

RE: [Beowulf] scheduler and perl

2006-08-03 Thread Clements, Brent M \(SAIC\)
If I recall from my LSF days, you can limit the number of jobs that a user can run at one time based upon queue policy. This is also the case with MAUI/Moab and some other policy-based job schedulers. BC -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of X

Re: [Beowulf] scheduler and perl

2006-08-01 Thread Diego M. Vadell
> Hi Jerry: >> the other example is that use system call and ssh >> to each node and run stuff and bypass the scheduler... Torque 2.1.2 has just been released. It comes with a pam module that, if I understood it right, makes it harder (though not impossible) for users to bypass the batch system. The

Re: [Beowulf] MPI2-standard to lauch "mpirun -np 2 myshellscript.sh"

2006-07-06 Thread Diego M. Vadell
Hi, I can confirm I have done what Kevin says in this email. At least it was enough for me to write a shell script that would do #!/bin/bash export FOO=bar mpirun ... my_parallel_app $@ Hope it helps, -- Diego. Kevin Ball wrote: Mathieu, On Fri, 2006-06-23 at 04:38, mg wrote: Hello,

RE: [Beowulf] 512 nodes Myrinet cluster Challanges

2006-05-03 Thread Clements, Brent M \(SAIC\)
I find that most large "supercomputers" are still nothing more than compute farms that have an execution daemon and policy monitor to manage the compute farm. Brent Clements This message may contain confidential and/or privileged information. If you are not the addressee or authori

RE: [Beowulf] running out of rsh ports

2006-05-03 Thread Clements, Brent M \(SAIC\)
I second pdsh Brent Clements This message may contain confidential and/or privileged information. If you are not the addressee or authorized to receive this for the addressee, you must not use, copy, disclose, or take any action based on this message or any information herein. If you

RE: [Beowulf] Nas parallel benchmarks issue

2006-02-21 Thread Clements, Brent M \(SAIC\)
p. Brent From: [EMAIL PROTECTED] on behalf of Clements, Brent M (SAIC) Sent: Tue 2/14/2006 11:43 AM To: beowulf@beowulf.org Subject: [Beowulf] Nas parallel benchmarks issue Hello, Would it be possible to find out from you guys exactly which benchmark I should build in order to test a 16-way

[Beowulf] Nas parallel benchmarks issue

2006-02-15 Thread Clements, Brent M \(SAIC\)
Hello, Would it be possible to find out from you guys exactly which benchmark I should build in order to test a 16-way SMP system? And secondly, how should I then run that benchmark? I've done the following in NPB3.2-OMP/: make bt CLASS=A and then run (per the README-3.1 instructions) setenv

Re: [Beowulf] distributions

2006-02-09 Thread Greg M. Kurtzer
On Mon, Feb 06, 2006 at 04:07:50PM -0800, Donald Becker wrote: > Nor does "ramdisk root" give you the magic. A ramdisk root is part of how > we implement the architecture, especially the part about not requiring local > storage or network file systems to work. (Philosophy: You mount file > sys

Re: [Beowulf] distributions

2006-02-01 Thread Greg M. Kurtzer
On Thu, Jan 26, 2006 at 04:09:58PM -0700, Warren Turkal wrote: > Is it ok to mix Linux distributions when building a cluster? I am wondering > for migration purposes. For instance, if the current cluster had FC2 and I > wanted to move to FC3, would it be okay to install new nodes as FC3 and > gr