Re: [Beowulf] Refactoring an MPI code to call from another MPI code

2018-05-06 Thread Charlie Peck
It’s hard to tell for certain without looking at the code, but I believe you want to use two different MPI communicators, one for each of the programs; see http://mpitutorial.com/tutorials/introduction-to-groups-and-communicators/ charlie > On May 6, 2018, at 01:10, Navid Shervani-Tabar wrote:
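
A minimal mpi4py sketch of that idea: split MPI_COMM_WORLD by a color so each program gets its own communicator. The even/odd split rule and the run_program_* calls are placeholders for illustration, not anything from the original thread.

    from mpi4py import MPI

    world = MPI.COMM_WORLD
    rank = world.Get_rank()

    # Hypothetical rule: even ranks run program A, odd ranks run program B.
    # The color argument decides which sub-communicator each rank joins.
    color = rank % 2
    subcomm = world.Split(color=color, key=rank)

    if color == 0:
        run_program_a(subcomm)   # placeholder for the first MPI code
    else:
        run_program_b(subcomm)   # placeholder for the second MPI code

Each program then performs all of its collectives on subcomm rather than MPI_COMM_WORLD, so the two codes never block on each other.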

Re: [Beowulf] Troubleshooting NFS stale file handles

2017-04-20 Thread Charlie Peck
+1 for looking at the MTUs. I just finished debugging what was manifesting as transient NFS problems of various types but turned out to be MTU mis-matches. charlie > On Apr 20, 2017, at 09:51, Gavin W. Burris wrote: > > Remembering that I once had two switches that were not allowing jumbo fram

Re: [Beowulf] clusters of beagles

2017-01-28 Thread Charlie Peck
> On Jan 28, 2017, at 10:04, Lux, Jim (337C) wrote: > On 1/28/17, 6:39 AM, "Skylar Thompson" wrote: > >> On 01/27/2017 12:14 PM, Lux, Jim (337C) wrote: >>> The pack of Beagles do have local disk storage (there's a 2GB flash on >>> board with a Debian image that it boots from). >>> >>> The Lit

Re: [Beowulf] Mobos for portable use

2017-01-21 Thread Charlie Peck
> On Jan 21, 2017, at 12:17, Jason Riedy wrote: > > And Scott Hamilton writes: >> These fairly simple concepts are not even introduced in the >> curriculum until grad school. > > That certainly is not universal. We (Georgia Tech) certainly > have HPC-oriented parallel programming available in th

Re: [Beowulf] Mobos for portable use

2017-01-19 Thread Charlie Peck
We’ve made 2 board units in the past that fit in Pelican’s briefcase form-factor container; it was easy to use them in airports, the back of a VW Vanagon on the way to conferences, etc. Now with NVIDIA and others producing really nice, powerful, small boards, Jim is correct, it’s possibl

Re: [Beowulf] MPI, fault handling, etc.

2016-03-14 Thread Charlie Peck
> On Mar 14, 2016, at 13:55, Lux, Jim (337C) wrote: > > … And communication, even between nodes of a cluster, isn’t free, nor > infinitely scalable. I think that with a lot of problems, it’s the > communication bottleneck that is the “rate limiting” step, whether it’s > CPU:cache; CPU:RAM; or

Re: [Beowulf] mpi_comm_create scaling issue

2013-09-05 Thread Charlie Peck
On Sep 5, 2013, at 11:56 AM, xingqiu yuan wrote: > Hi ALL > > MPI_COMM_CREATE takes a substantial amount of time on large communicators, > any good ideas to reduce the time consumed on large communicators? Which MPI binding you use can make a difference too, try another one (or two) and see
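
Beyond trying another MPI implementation, one alternative worth noting (not suggested in the thread itself, and requiring an MPI-3 library) is MPI_Comm_create_group, which is collective only over the new group's members. A minimal mpi4py sketch, assuming for illustration that the first half of the ranks want their own communicator:

    from mpi4py import MPI

    world = MPI.COMM_WORLD
    rank = world.Get_rank()

    # Assumed example: build a communicator over the first half of the ranks.
    members = list(range(world.Get_size() // 2))
    group = world.Get_group().Incl(members)

    if rank in members:
        # Create_group (MPI-3) is collective only over the group's members,
        # so ranks outside the group never have to participate in the call.
        newcomm = world.Create_group(group)

The classic MPI_Comm_create, by contrast, is collective over the parent communicator, which is part of why it gets expensive at scale.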

Re: [Beowulf] why we need cheap, open learning clusters

2013-05-12 Thread Charlie Peck
On May 12, 2013, at 3:11 PM, "Lux, Jim (337C)" wrote: > ... > Of some interest would be whether the LittleFE folks think that using rPis > instead of Via Mobos would be worthwhile. By the time you stick an SD card in > the Pi and arrange power supplies, I'm not sure the price difference is > all tha

Re: [Beowulf] Help: Raspberry Pi Cluster

2012-12-14 Thread Charlie Peck
On Dec 13, 2012, at 6:48 PM, Lux, Jim (337C) wrote: > … > Let us not also forget the marketing value of a table full of blinky lights > computers compared to a bunch of boxes displayed on a screen. If you were > trying to sell the concept to be used at full scale with bigger faster nodes, > t

Re: [Beowulf] How to make a BeagleBoard Elastic R Beowulf Cluster in a Briefcase

2010-09-17 Thread Charlie Peck
On Sep 17, 2010, at 12:52 PM, lsi wrote: > But what of homegrown systems that cannot be taken to work, or made > part of a commercial product, that were just made because it could be > done? The Maker community has a lot to say on this point, probably way better than I can, http://makerfaire.c

Re: [Beowulf] How to make a BeagleBoard Elastic R Beowulf Cluster in a Briefcase

2010-09-17 Thread Charlie Peck
> lsi wrote: >> Cute, but my question is, what use is one of these homegrown platforms? How about education, outreach and training? There are at least a couple of projects [1] that use small, home-built clusters for, e.g., undergraduate CS education, faculty education/re-training for parallel

Re: [Beowulf] wall clock time for mpi_allreduce?

2010-09-12 Thread Charlie Peck
On Sep 10, 2010, at 10:46 PM, xingqiu yuan wrote: > Hi > > I found that use of mpi_allreduce to calculate the global maximum and > minimum takes a very long time, any better alternatives to calculate the > global maximum/minimum values? If only the rank 0 process needs to know the global max and m
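
A minimal mpi4py sketch of that distinction (compute_local_max is a placeholder for whatever the application already computes per rank): MPI_Reduce delivers the result only to the root, which avoids the broadcast half of an allreduce when the other ranks don't need the answer.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    local_max = compute_local_max()   # placeholder for the per-rank value

    # reduce() delivers the global maximum only to root; allreduce() would
    # additionally broadcast it back to every rank, which costs more.
    global_max = comm.reduce(local_max, op=MPI.MAX, root=0)

    if comm.Get_rank() == 0:
        print("global max:", global_max)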

Re: [Beowulf] MPI + CUDA codes

2009-06-15 Thread Charlie Peck
On Jun 12, 2009, at 7:54 PM, Brock Palen wrote: I think the NAMD folks had a paper and data from real running code at SC last year. Check with them. Their paper from SC08 is here: http://mc.stanford.edu/cgi-bin/images/8/8a/SC08_NAMD.pdf charlie

Re: [Beowulf] Station wagon full of tapes

2009-05-26 Thread Charlie Peck
On May 26, 2009, at 11:16 AM, Robert G. Brown wrote: Sure, but why wouldn't it be cheaper for e.g. NSF or NIH to fund an exact clone of the service Amazon plans to offer and provide it for free to its supported research groups (or rather, do bookkeeping but it is all internal bookkeeping, mov

[Beowulf] Workshops for parallel/distributed computing, computational thinking, etc. (free)

2009-04-20 Thread Charlie Peck
/cluster/computational material. Questions can be directed to me or to worksh...@sc-education.org Charlie Peck SC Education Program Parallel Programming and Cluster Computing June 7-13: Kean University July 5-11: Louisiana State University August 9-15: U Oklahoma Introduction to Computational

Re: [Beowulf] What is the right lubricant for computer rack sliding rails?

2009-02-11 Thread Charlie Peck
On Feb 11, 2009, at 11:56 AM, Skylar Thompson wrote: dan.kid...@quadrics.com wrote: Kilian, Well you shouldn't be using your bare fingers. Everyone has their own preferred trick. I put a small straight blade screwdriver in the hole, and then pop in the cage nut by hand using the screwdrive

Re: [Beowulf] Best training for rusty HPC skills?

2008-06-21 Thread Charlie Peck
On Jun 20, 2008, at 1:53 PM, Gregory R. Warnes, Ph.D. wrote: I've just been appointed to head an academic computing center, after an absence from the HPC arena of 10 years. Wha

Re: [Beowulf] ECC Scrub, which setting?

2008-05-19 Thread Charlie Peck
On May 19, 2008, at 7:04 PM, Greg Lindahl wrote: It must suck when you lose tenure for publishing a wrong paper. If that's all it took to lose tenure there would be a lot more o

[Beowulf] Summer workshops in parallel and distributed computing and computational science

2008-05-09 Thread Charlie Peck
Slightly off-topic (but not too far): The SuperComputing (SC) Education Program is a year-long program working with undergraduate faculty, administrators, college students, and collab

Re: [Beowulf] OpenMP vs. MPI benchmark for multi-core machines?

2008-04-17 Thread Charlie Peck
On Apr 17, 2008, at 8:38 AM, Eray Ozkural wrote: Is there such a benchmark that I can refer to? I am increasingly convinced that OpenMP/pthread is required only in extreme cases, but I need some numbers to prove that (to myself and my advisor). Like most other cases YMMV, significantly. That

Re: [Beowulf] Really efficient MPIs??

2007-11-28 Thread Charlie Peck
On Nov 28, 2007, at 8:04 AM, Jeffrey B. Layton wrote: If you don't want to pay money for an MPI, then go with Open-MPI. It too can run on various networks without recompiling. Plus it's open-source. Unless you are using a gigabit ethernet, Open-MPI is noticeably less efficient than LAM-MPI o

Re: [Beowulf] Really efficient MPIs??

2007-11-28 Thread Charlie Peck
On Nov 28, 2007, at 12:31 AM, amjad ali wrote: Hello, Because today clusters with multicore nodes are quite common and the cores within a node share memory, which implementations of MPI (whether commercial or free) make automatic and efficient use of shared memory for message passi

Re: [Beowulf] microWulf

2007-11-22 Thread Charlie Peck
On Nov 18, 2007, at 9:13 PM, Donald Shillady wrote: 1. It appears that the microWulf system could be extended to 16 CPU using quad chips. Would it be simpler to use just two of the faster Intel Core-2 Quad chips to achieve an 8-node system? Maybe it is much cheaper to use the older techno

[Beowulf] MPI for Python?

2007-11-02 Thread Charlie Peck
We'd like to start using MPI with Python. We've found a number of different bindings, mympi and pympi seem to be the most commonly used but there are others as well. We're not Python experts and were wondering what others might suggest is the "best" MPI binding to use with Python. thank
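
As an illustration of what calling MPI from Python looks like, here is a minimal two-rank sketch using mpi4py (one widely used binding, though not one of those named in the thread):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        # The lowercase send/recv API pickles arbitrary Python objects.
        comm.send({"msg": "hello from rank 0"}, dest=1, tag=11)
    elif rank == 1:
        data = comm.recv(source=0, tag=11)
        print("rank 1 received:", data)

Run with something like mpiexec -n 2 python hello.py (the file name is arbitrary).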

Re: [Beowulf] HPL on an ad-hoc cluster

2007-03-08 Thread Charlie Peck
On Mar 7, 2007, at 11:12 AM, Olli-Pekka Lehto wrote: ... So, do you think that this is a pipe dream or a feasible project? Which path would you take to implement this? Consider something embarrassingly parallel with a work-pool model. Your assignment servers could be on stable machines, c
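
A minimal sketch of that work-pool (manager/worker) pattern in mpi4py; the task list and the squaring "work" are placeholders, and it assumes there are at least as many tasks as workers:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    WORK_TAG, STOP_TAG = 1, 2
    tasks = list(range(100))              # placeholder work items

    if rank == 0:
        status = MPI.Status()
        results = []
        # Prime every worker with one task, then feed whoever finishes next.
        for w in range(1, size):
            comm.send(tasks.pop(), dest=w, tag=WORK_TAG)
        while tasks:
            results.append(comm.recv(source=MPI.ANY_SOURCE, status=status))
            comm.send(tasks.pop(), dest=status.Get_source(), tag=WORK_TAG)
        for _ in range(1, size):
            results.append(comm.recv(source=MPI.ANY_SOURCE, status=status))
            comm.send(None, dest=status.Get_source(), tag=STOP_TAG)
    else:
        status = MPI.Status()
        while True:
            task = comm.recv(source=0, status=status)
            if status.Get_tag() == STOP_TAG:
                break
            comm.send(task * task, dest=0)   # placeholder "work"

Each worker only ever holds one task at a time, so a flaky node costs at most one work unit, which is part of why this model suits an ad-hoc cluster better than a tightly coupled run.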

Re: [Beowulf] network filesystem

2007-03-06 Thread Charlie Peck
erience with it. It looks like there are 3 primary directories, the software root, the tmp dir, and the molecular system/ output files. Which subset of these shouldn't be accessed via NFS? thanks, charlie Charlie Peck Computer Science, Earlham College http://cs.earlham.edu h

Re: [Beowulf] Re: Cluster newbie, power recommendations

2006-03-21 Thread Charlie Peck
On Mar 21, 2006, at 12:35 PM, David Mathog wrote: Charlie Peck <[EMAIL PROTECTED]> wrote I think clusters like the one Eric wants to build have /significant/ educational value, both in the building and the use. How else does one learn to do parallel/distributed programming if no

Re: [Beowulf] Cluster newbie, power recommendations

2006-03-20 Thread Charlie Peck
On Mar 20, 2006, at 6:50 PM, Robert G. Brown wrote: On Sun, 19 Mar 2006, Eric Geater at Home wrote: Howdy, everyone! Maybe this is a question better suited for hardware heads, but I've become Beowulf curious, and am interested in learning a hardware question. I have access to a bunch of ol

Re: [Beowulf] about clusters in high schools

2006-01-27 Thread Charlie Peck
On Jan 26, 2006, at 10:25 PM, H.Vidal, Jr. wrote: Howdy. ... So there you go, I have thrown out the first chip. Any takers to place a comment or two? Check out the Shodor Education Foundation/NCSI, they have a number of high-school and I think also middle school programs for computational