Re: [Beowulf] Parallel Programming Question

2009-07-01 Thread Gus Correa
Hi Bogdan, list. Bogdan Costescu wrote: On Tue, 30 Jun 2009, Gus Correa wrote: My answers were given in the context of Amjad's original questions. Sorry, I somehow missed the context for the questions. Still, the thoughts about I/O programming are general in nature, so they would apply in an…

Re: [Beowulf] Parallel Programming Question

2009-07-01 Thread Gus Correa
Hi Amjad, list. amjad ali wrote: Hi, Gus--thank you. You are right. I mainly have to run programs on a small cluster (GigE) dedicated to my job only, and sometimes I might get some opportunity to run my code on a shared cluster with hundreds of nodes. Thanks for telling. My guess was not v…

Re: [Beowulf] Parallel Programming Question

2009-07-01 Thread David N. Lombard
On Wed, Jul 01, 2009 at 09:10:20AM -0700, Ashley Pittman wrote: > On Fri, 2009-06-26 at 23:30 -0700, Bill Broadley wrote: > > Keep in mind that when you say broadcast, many (not all) MPI implementations do not do a true network layer broadcast... and that in most situations network…

Re: [Beowulf] Parallel Programming Question

2009-07-01 Thread Ashley Pittman
On Fri, 2009-06-26 at 23:30 -0700, Bill Broadley wrote: > Keep in mind that when you say broadcast, many (not all) MPI implementations do not do a true network layer broadcast... and that in most situations network uplinks are distinct from the downlinks (except for the ACKs). If…

Re: [Beowulf] Parallel Programming Question

2009-07-01 Thread Bogdan Costescu
On Tue, 30 Jun 2009, Gus Correa wrote: My answers were given in the context of Amjad's original questions. Sorry, I somehow missed the context for the questions. Still, the thoughts about I/O programming are general in nature, so they would apply in any case. Hence, he may want to follow th…

Re: [Beowulf] Parallel Programming Question

2009-06-30 Thread amjad ali
Hi, Gus--thank you. You are right. I mainly have to run programs on a small cluster (GigE) dedicated to my job only, and sometimes I might get some opportunity to run my code on a shared cluster with hundreds of nodes. My parallel CFD application involves (in its simplest form): 1) reading of inp…

Re: [Beowulf] Parallel Programming Question

2009-06-30 Thread Gus Correa
Hi Bogdan, list. Oh, well, this is definitely a peer-reviewed list. My answers were given in the context of Amjad's original questions, and the perception, based on Amjad's previous and current postings, that he is not dealing with a large cluster, or with many users, and plans to both paralleliz…

Re: [Beowulf] Parallel Programming Question

2009-06-30 Thread Bogdan Costescu
On Wed, 24 Jun 2009, Gus Correa wrote: the "master" processor reads... broadcasts parameters that are used by all "slave" processors, and scatters any data that will be processed in a distributed fashion by each "slave" processor. ... That always works; there is no file system contention. I…
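A minimal C sketch of that broadcast-then-scatter pattern, for illustration only: the actual file reading is omitted, and the array names, the placeholder problem size, and the assumption that the data divides evenly across ranks are mine, not anything from the code discussed in the thread.

    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, nprocs, nglobal = 0, nlocal;
        double *all = NULL, *mine = NULL;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        if (rank == 0) {
            /* only the master touches the input file (reading omitted here) */
            nglobal = 1024 * nprocs;               /* placeholder problem size */
            all = malloc(nglobal * sizeof(double));
            /* ... fill 'all' from the input file ... */
        }

        /* broadcast the scalar parameters every rank needs */
        MPI_Bcast(&nglobal, 1, MPI_INT, 0, MPI_COMM_WORLD);

        /* scatter one equal chunk of the distributed data to each rank
           (assumes nglobal is divisible by nprocs) */
        nlocal = nglobal / nprocs;
        mine = malloc(nlocal * sizeof(double));
        MPI_Scatter(all, nlocal, MPI_DOUBLE,
                    mine, nlocal, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        /* ... each rank now works on its own 'mine' chunk ... */

        free(all);      /* free(NULL) is harmless on the non-root ranks */
        free(mine);
        MPI_Finalize();
        return 0;
    }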

Re: [Beowulf] Parallel Programming Question

2009-06-26 Thread Bill Broadley
amjad ali wrote: > Hello all, In an MPI parallel code, which of the following two is a better way: 1) Read the input data from input data files only by the master process and then broadcast it to other processes. 2) All the processes read the input data directly from input da…

Re: [Beowulf] Parallel Programming Question

2009-06-24 Thread Gus Correa
Hi Amjad, list. Mark Hahn said it all: #2 is possible, but only works efficiently under certain conditions. In my experience here, with ocean, atmosphere, and climate models, I've seen parallel programs with both styles of I/O (not only input!). Here is what I can tell about them. #1 is the tradi…
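Since output comes up here too: one common way to keep the write side free of contention is one file per rank. A hedged sketch in C; the out.%04d.dat naming is just an example, not anything prescribed on the list, and the trade-off is a merge step in post-processing instead of every rank fighting over a single shared file.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        char fname[64];
        FILE *fp;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* one output file per rank, e.g. out.0003.dat: no write contention,
           but it leaves N files to merge afterwards */
        snprintf(fname, sizeof(fname), "out.%04d.dat", rank);
        fp = fopen(fname, "w");
        if (fp != NULL) {
            fprintf(fp, "rank %d: local results would go here\n", rank);
            fclose(fp);
        }

        MPI_Finalize();
        return 0;
    }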

Re: [Beowulf] Parallel Programming Question

2009-06-24 Thread Mark Hahn
In an MPI parallel code, which of the following two is a better way: 1) Read the input data from input data files only by the master process and then broadcast it to other processes. 2) All the processes read the input data directly from input data files (no need of broadcast from the mast…

[Beowulf] Parallel Programming Question

2009-06-23 Thread amjad ali
Hello all, In an MPI parallel code, which of the following two is a better way: 1) Read the input data from input data files only by the master process and then broadcast it to other processes. 2) All the processes read the input data directly from input data files (no need of broadcast fr…
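For concreteness, a minimal sketch of option 1 in C; the file name input.dat and the fixed size N are assumptions for illustration, not part of Amjad's code.

    #include <stdio.h>
    #include <mpi.h>

    #define N 1000   /* hypothetical input size, known to every rank */

    int main(int argc, char **argv)
    {
        int rank, i;
        double data[N];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* option 1: only the master process reads the input file */
            FILE *fp = fopen("input.dat", "r");
            if (fp == NULL)
                MPI_Abort(MPI_COMM_WORLD, 1);
            for (i = 0; i < N; i++)
                if (fscanf(fp, "%lf", &data[i]) != 1)
                    MPI_Abort(MPI_COMM_WORLD, 1);
            fclose(fp);
        }

        /* every other process receives the same data over the network */
        MPI_Bcast(data, N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        /* ... compute ... */

        MPI_Finalize();
        return 0;
    }

Option 2 would be the same program without the rank test and the MPI_Bcast: every rank opens input.dat itself, which is where the file-system contention discussed in this thread comes from.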

Re: [Beowulf] Parallel Programming Question

2009-04-09 Thread Nifty Tom Mitchell
On Thu, Apr 09, 2009 at 08:15:07PM +0500, amjad ali wrote: > Hello All, On my 4-node Beowulf Cluster, when I run my PDE solver code (compiled with mpif90 of openmpi-installed-with-gfortran) with -np 4 launched only on the Head Node (without providing -machinefile), it gives me correct…

[Beowulf] Parallel Programming Question

2009-04-09 Thread amjad ali
Hello All, On my 4-node Beowulf Cluster, when I run my PDE solver code (compiled with mpif90 of openmpi-installed-with-gfortran) with -np 4 launched only on the Head Node (without providing -machinefile), it gives me correct results. There is only one problem: when I monitor RAM behavior it gets f…
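For reference, a hedged example of spreading the ranks with an Open MPI hostfile; the node names and the executable name pde_solver are placeholders. Without -machinefile or a hostfile, mpirun starts all four ranks on the node it was launched from, which would leave the head node carrying the whole memory load.

    $ cat machines
    node01 slots=1
    node02 slots=1
    node03 slots=1
    node04 slots=1

    $ mpirun -np 4 --hostfile machines ./pde_solver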

Re: [Beowulf] Parallel Programming Question

2007-07-17 Thread Robert G. Brown
On Tue, 17 Jul 2007, Joe Landman wrote: (The functions are borrowed from the library provided by "Numerical Recipes in C++") I'm currently calling these functions from the main loop with the lines: ah, that explains it... Um, "borrowed"? You'd better watch out for the DMCA Police -- the…

Re: [Beowulf] Parallel Programming Question

2007-07-17 Thread Joe Landman
Hi James: James Roberson wrote: Hello, I'm new to parallel programming and MPI. I've developed a simulator in C++ for which I would like to decrease the running time by using a Beowulf cluster. I'm not interested in optimizing speed; I'm just looking for a quick and easy way to significantl…

[Beowulf] Parallel Programming Question

2007-07-17 Thread James Roberson
Hello, I'm new to parallel programming and MPI. I've developed a simulator in C++ for which I would like to decrease the running time by using a Beowulf cluster. I'm not interested in optimizing speed; I'm just looking for a quick and easy way to significantly improve the speed over running the…