It’s hard to tell for certain without looking at the code, but I believe you
want to use two different MPI communicators, one for each of the programs; see
http://mpitutorial.com/tutorials/introduction-to-groups-and-communicators/
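A minimal sketch of what I have in mind, assuming both programs are launched as
a single MPI job and each rank can tell which program it belongs to (the
program_id command-line argument here is hypothetical):

/* Split MPI_COMM_WORLD into one communicator per program. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Hypothetical: 0 or 1, passed on the command line at launch. */
    int program_id = atoi(argv[1]);

    /* Ranks with the same color (program_id) land in the same communicator. */
    MPI_Comm my_comm;
    MPI_Comm_split(MPI_COMM_WORLD, program_id, world_rank, &my_comm);

    /* ... each program now does its collectives and point-to-point on my_comm ... */

    MPI_Comm_free(&my_comm);
    MPI_Finalize();
    return 0;
}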
charlie
> On May 6, 2018, at 01:10, Navid Shervani-Tabar wrote:
+1 for looking at the MTUs. I just finished debugging what was manifesting as
transient NFS problems of various types but turned out to be MTU mismatches.
charlie
> On Apr 20, 2017, at 09:51, Gavin W. Burris wrote:
>
> Remembering that I once had two switches that were not allowing jumbo fram
> On Jan 28, 2017, at 10:04, Lux, Jim (337C) wrote:
> On 1/28/17, 6:39 AM, "Skylar Thompson" wrote:
>
>> On 01/27/2017 12:14 PM, Lux, Jim (337C) wrote:
>>> The pack of Beagles do have local disk storage (there's a 2GB flash on
>>> board with a Debian image that it boots from).
>>>
>>> The Lit
> On Jan 21, 2017, at 12:17, Jason Riedy wrote:
>
> And Scott Hamilton writes:
>> These fairly simple concepts are not even introduced in the
>> curriculum until grad school.
>
> That certainly is not universal. We (Georgia Tech) certainly
> have HPC-oriented parallel programming available in th
We’ve made 2 board units in the past that fit in Pelican’s briefcase
form-factor container; it was easy to use them in airports, the back of a VW
Vanagon on the way to conferences, etc.
Now, with NVIDIA and others producing really nice, powerful, small boards,
Jim is correct, it’s possibl
> On Mar 14, 2016, at 13:55, Lux, Jim (337C) wrote:
>
> … And communication, even between nodes of a cluster, isn’t free, nor
> infinitely scalable. I think that with a lot of problems, it’s the
> communication bottleneck that is the “rate limiting” step, whether it’s
> CPU:cache; CPU:RAM; or
On Sep 5, 2013, at 11:56 AM, xingqiu yuan wrote:
> Hi ALL
>
> MPI_COMM_CREATE takes a substantial amount of time on large communicators,
> any good ideas for reducing the time it takes on large communicators?
Which MPI binding you use can make a difference too; try another one (or two)
and see
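One quick way to compare is to time the call itself under each MPI you have
handy; a rough sketch (mine, not from the thread) that creates a communicator
from the full group of MPI_COMM_WORLD:

/* Time MPI_Comm_create on the full world group; build and run the same
   code under each MPI implementation you want to compare. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Group world_group;
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    MPI_Comm new_comm;
    MPI_Comm_create(MPI_COMM_WORLD, world_group, &new_comm);

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("MPI_Comm_create took %f seconds\n", t1 - t0);

    MPI_Comm_free(&new_comm);
    MPI_Group_free(&world_group);
    MPI_Finalize();
    return 0;
}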
On May 12, 2013, at 3:11 PM, "Lux, Jim (337C)" wrote:
> ...
> Of some interest would be whether the LittleFE folks think that using rPis
> instead of Via Mobos would be worthwhile. By the time you stick an SD card in
> the Pi and arrange power supplies, I'm not sure the price difference is
> all tha
On Dec 13, 2012, at 6:48 PM, Lux, Jim (337C) wrote:
> …
> Let us not also forget the marketing value of a table full of blinky-lights
> computers compared to a bunch of boxes displayed on a screen. If you were
> trying to sell the concept to be used at full scale with bigger faster nodes,
> t
On Sep 17, 2010, at 12:52 PM, lsi wrote:
> But what of homegrown systems that cannot be taken to work, or made
> part of a commercial product, that were just made because it could be
> done?
The Maker community has a lot to say on this point, probably far better than I
can say it: http://makerfaire.c
> lsi wrote:
>> Cute, but my question is, what use is one of these homegrown platforms?
How about education, outreach and training? There are at least a couple of
projects [1] that use small, home-built clusters for, e.g., undergraduate CS
education, faculty education/re-training for parallel
On Sep 10, 2010, at 10:46 PM, xingqiu yuan wrote:
> Hi
>
> I found that using mpi_allreduce to calculate the global maximum and
> minimum takes a very long time; any better alternatives to calculate the
> global maximum/minimum values?
If only the rank 0 process needs to know the global max and min, MPI_Reduce
will typically be cheaper than MPI_Allreduce, since it skips broadcasting the
result back to every rank.
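A minimal sketch, where local_max/local_min stand in for values computed from
each rank's own data:

/* Only rank 0 ends up with the global max and min. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Stand-ins for this rank's real local extrema. */
    double local_max = (double)rank;
    double local_min = -(double)rank;

    double global_max, global_min;
    MPI_Reduce(&local_max, &global_max, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    MPI_Reduce(&local_min, &global_min, 1, MPI_DOUBLE, MPI_MIN, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global max %g, global min %g\n", global_max, global_min);

    MPI_Finalize();
    return 0;
}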
On Jun 12, 2009, at 7:54 PM, Brock Palen wrote:
I think the Namd folks had a paper and data from real running code
at SC last year. Check with them.
Their paper from SC08 is here:
http://mc.stanford.edu/cgi-bin/images/8/8a/SC08_NAMD.pdf
charlie
On May 26, 2009, at 11:16 AM, Robert G. Brown wrote:
Sure, but why wouldn't it be cheaper for e.g. NSF or NIH to fund an
exact clone of the service Amazon plans to offer and provide it for
free
to its supported research groups (or rather, do bookkeeping but it is
all internal bookkeeping, mov
/cluster/computational material.
Questions can be directed to me or to worksh...@sc-education.org
Charlie Peck
SC Education Program
Parallel Programming and Cluster Computing
June 7-13: Kean University
July 5-11: Louisiana State University
August 9-15: U Oklahoma
Introduction to Computational
On Feb 11, 2009, at 11:56 AM, Skylar Thompson wrote:
dan.kid...@quadrics.com wrote:
Kilian,
Well you shouldn't be using your bare fingers.
Everyone has their own preferred trick. I put a small straight
blade screwdriver in the hole, and then pop in the cage nut by
hand using the screwdrive
On Jun 20, 2008, at 1:53 PM, Gregory R. Warnes, Ph.D. wrote:
I've just been appointed to head an academic computing center,
after an absence from the HPC arena of 10 years. Wha
On May 19, 2008, at 7:04 PM, Greg Lindahl wrote:
It must suck when you lose tenure for publishing a wrong paper.
If that's all it took to lose tenure, there would be a lot more
o
Slightly off-topic (but not too far):
The SuperComputing (SC) Education Program is a year-long program working
with undergraduate faculty, administrators, college students, and
collab
On Apr 17, 2008, at 8:38 AM, Eray Ozkural wrote:
Is there such a benchmark that I can refer to? I am increasingly
convinced that OpenMP/pthread is required only in extreme cases, but I
need some numbers to prove that (to myself and my advisor).
Like most other cases, YMMV, significantly. That
On Nov 28, 2007, at 8:04 AM, Jeffrey B. Layton wrote:
If you don't want to pay money for an MPI, then go with Open-MPI.
It too can run on various networks without recompiling. Plus it's
open-source.
Unless you are using gigabit ethernet, Open-MPI is noticeably less
efficient than LAM-MPI o
On Nov 28, 2007, at 12:31 AM, amjad ali wrote:
Hello,
Because clusters with multicore nodes are quite common today
and the cores within a node share memory:
which implementations of MPI (commercial or free) make
automatic and efficient use of shared memory for message passi
On Nov 18, 2007, at 9:13 PM, Donald Shillady wrote:
1. It appears that the microWulf system could be extended to 16 CPUs
using quad chips. Would it be simpler to use just two of the
faster Intel Core-2 Quad chips to achieve an 8-node system? Maybe
it is much cheaper to use the older techno
We'd like to start using MPI with Python. We've found a number of
different bindings; mympi and pympi seem to be the most commonly used,
but there are others as well. We're not Python experts and were
wondering what others might suggest is the "best" MPI binding to use
with Python.
thank
On Mar 7, 2007, at 11:12 AM, Olli-Pekka Lehto wrote:
...
So, do you think this is a pipe dream or a feasible project?
Which path would you take to implement this?
Consider something embarrassingly parallel with a work-pool model.
Your assignment servers could be on stable machines, c
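A minimal sketch of the work-pool pattern in MPI (the task count here is
hypothetical); rank 0 plays the assignment server and hands out task indices
until the pool is drained:

/* Work-pool: rank 0 hands out task indices, workers request work until
   they receive a stop tag. */
#include <mpi.h>

#define TAG_WORK 1
#define TAG_STOP 2

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int num_tasks = 100;   /* hypothetical number of work units */

    if (rank == 0) {             /* manager / assignment server */
        int next = 0, stopped = 0;
        MPI_Status st;
        while (stopped < size - 1) {
            int dummy;
            MPI_Recv(&dummy, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &st);   /* a worker asks for work */
            if (next < num_tasks) {
                MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK, MPI_COMM_WORLD);
                next++;
            } else {
                MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_STOP, MPI_COMM_WORLD);
                stopped++;
            }
        }
    } else {                     /* worker */
        MPI_Status st;
        int task, request = 0;
        for (;;) {
            MPI_Send(&request, 1, MPI_INT, 0, TAG_WORK, MPI_COMM_WORLD);
            MPI_Recv(&task, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == TAG_STOP) break;
            /* ... do the embarrassingly parallel work for 'task' here ... */
        }
    }

    MPI_Finalize();
    return 0;
}

If the assignment servers need to be more robust than the workers, the same
pattern could also be run over sockets or a queueing system instead of MPI.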
erience with it. It looks like there are 3 primary
directories, the software root, the tmp dir, and the molecular system/
output files. Which subset of these shouldn't be accessed via NFS?
thanks,
charlie
Charlie Peck
Computer Science, Earlham College
http://cs.earlham.edu
h
On Mar 21, 2006, at 12:35 PM, David Mathog wrote:
Charlie Peck <[EMAIL PROTECTED]> wrote
I think clusters like the one Eric wants to build have /significant/
educational value, both in the building and the use. How else does
one
learn to do parallel/distributed programming if no
On Mar 20, 2006, at 6:50 PM, Robert G. Brown wrote:
On Sun, 19 Mar 2006, Eric Geater at Home wrote:
Howdy, everyone!
Maybe this is a question better suited for hardware heads, but I've
become
Beowulf curious, and am interested in learning a hardware question.
I have access to a bunch of ol
On Jan 26, 2006, at 10:25 PM, H.Vidal, Jr. wrote:
Howdy.
...
So there you go, I have thrown out the first chip. Any takers to place
a comment or two?
Check out the Shodor Education Foundation/NCSI; they have a number of
high-school and I think also middle-school programs for computational