Hi Amjad, list
amjad ali wrote:
Hi,
Gus--thank you.
You are right. I mainly have to run programs on a small cluster (GigE)
dedicated to my job only; and sometimes I might get some opportunity to
run my code on a shared cluster with hundreds of nodes.
Thanks for clarifying.
My guess was not v...
On Fri, 2009-06-26 at 23:30 -0700, Bill Broadley wrote:
> Keep in mind that when you say broadcast, many (not all) MPI
> implementations do not do a true network-layer broadcast... and that
> in most situations network uplinks are distinct from the downlinks
> (except for the ACKs).
>
> If ...
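For reference, a minimal sketch of the call under discussion, using the standard MPI C API; the buffer size and root rank are illustrative. Every rank makes the same MPI_Bcast call, and the library decides how the bytes actually move -- most implementations use a tree of point-to-point messages rather than a network-layer broadcast frame, which is the distinction Bill is drawing.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        double params[1024];      /* illustrative parameter buffer */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* rank 0 would fill params[] here, e.g. from an input file */
        }

        /* Same call on every rank; the implementation usually forwards
           the data through a tree of point-to-point sends, not an
           Ethernet broadcast. */
        MPI_Bcast(params, 1024, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }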
On Tue, 30 Jun 2009, Gus Correa wrote:
My answers were given in the context of Amjad's original questions
Sorry, I somehow missed the context for the questions. Still, the
thoughts about I/O programming are general in nature, so they would
apply in any case.
Hence, he may want to follow th...
Hi,
Gus--thank you.
You are right. I mainly have to run programs on a small cluster (GigE)
dedicated to my job only; and sometimes I might get some opportunity to run
my code on a shared cluster with hundreds of nodes.
My parallel CFD application involves (in its simplest form):
1) reading of input ...
Hi Bogdan, list
Oh, well, this is definitely a peer reviewed list.
My answers were given in the context of Amjad's original
questions, and the perception, based on Amjad's previous
and current postings, that he is not dealing with a large cluster,
or with many users, and plans to both parallelize ...
On Wed, 24 Jun 2009, Gus Correa wrote:
the "master" processor reads... broadcasts parameters that are used
by all "slave" processors, and scatters any data that will be
processed in a distributed fashion by each "slave" processor.
...
That always works; there is no file system contention.
I ...
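A minimal sketch of the pattern Gus describes above (the master reads, broadcasts the parameters, and scatters the distributed data), in the MPI C API; the file name "input.dat", the data layout, and the assumption that the data divides evenly among the ranks are all illustrative.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, nprocs, nglobal = 0;
        double *all = NULL;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        if (rank == 0) {                         /* only the master touches the file */
            FILE *fp = fopen("input.dat", "r");  /* hypothetical input file */
            if (!fp) MPI_Abort(MPI_COMM_WORLD, 1);
            fscanf(fp, "%d", &nglobal);
            all = malloc(nglobal * sizeof(double));
            for (int i = 0; i < nglobal; i++)
                fscanf(fp, "%lf", &all[i]);
            fclose(fp);
        }

        /* broadcast the parameter every rank needs */
        MPI_Bcast(&nglobal, 1, MPI_INT, 0, MPI_COMM_WORLD);

        /* scatter the distributed data (assumes nglobal % nprocs == 0) */
        int nlocal = nglobal / nprocs;
        double *mine = malloc(nlocal * sizeof(double));
        MPI_Scatter(all, nlocal, MPI_DOUBLE, mine, nlocal, MPI_DOUBLE,
                    0, MPI_COMM_WORLD);

        /* ... each rank now works on mine[0 .. nlocal-1] ... */

        free(mine);
        free(all);        /* free(NULL) is a no-op on the non-root ranks */
        MPI_Finalize();
        return 0;
    }

Only one process ever opens the file, so the file system sees a single reader, at the cost of funnelling all input through rank 0.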
Hi Amjad, list
Mark Hahn said it all:
#2 is possible, but only works efficiently under certain conditions.
In my experience here, with ocean, atmosphere, and climate models,
I've seen parallel programs with both styles of I/O (not only input!).
Here is what I can tell about them.
#1 is the traditional ...
Hello all,
In an MPI parallel code, which of the following two is a better way:
1) Read the input data from input data files only by the master process
and then broadcast it to other processes.
2) All the processes read the input data directly from input data files
(no need of broadcast from the master process).
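For contrast with the broadcast sketch earlier, a minimal sketch of option 2 in the MPI C API: every rank opens and parses the same input file, so no broadcast is needed. The file name is illustrative, the file has to be visible on every node (for example over NFS), and with many ranks the simultaneous reads can hammer the file server -- which is the "only works efficiently under certain conditions" caveat quoted above.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, nsteps = 0;
        double dt = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* every rank reads the same (shared) input file; no broadcast,
           but all ranks hit the file server at the same time */
        FILE *fp = fopen("input.dat", "r");    /* hypothetical shared file */
        if (!fp) MPI_Abort(MPI_COMM_WORLD, 1);
        fscanf(fp, "%d %lf", &nsteps, &dt);
        fclose(fp);

        /* ... every rank now holds nsteps and dt ... */

        MPI_Finalize();
        return 0;
    }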
Hello All,
On my 4-node Beowulf cluster, when I run my PDE solver code (compiled
with mpif90 of openmpi-installed-with-gfortran) with -np 4 launched
only on the head node (without providing -machinefile), it gives me
correct results. There is only one problem: when I monitor RAM
behavior, it gets f...
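If the goal is to spread those four ranks across the four nodes instead of stacking them all in the head node's memory, the usual route with Open MPI is a machinefile; the host names and executable name below are made up, so substitute your own:

    # machines -- one host per line (hypothetical names)
    node01
    node02
    node03
    node04

    mpirun -np 4 -machinefile machines ./solver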
On Tue, 17 Jul 2007, Joe Landman wrote:
(The functions are borrowed from the library provided by "Numerical Recipes
in C++")
I'm currently calling these functions from the main loop with the lines:
Ah, that explains it...
Um, "borrowed"? You'd better watch out for the DMCA Police -- the...
Hello,
I'm new to parallel programming and MPI. I've developed a simulator in C++
for which I would like to decrease the running time by using a Beowulf
cluster. I'm not interested in optimizing speed; I'm just looking for a
quick and easy way to significantly improve the speed over running
the ...
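If the simulator's runs are independent of one another (an assumption; the posting doesn't say), the quickest and easiest speedup is usually to farm whole runs out to the ranks rather than parallelize the simulator internally. A minimal sketch in the MPI C API, which can be called unchanged from C++; run_case() and NCASES are placeholders for the real work:

    #include <mpi.h>
    #include <stdio.h>

    #define NCASES 100          /* placeholder: number of independent runs */

    /* placeholder standing in for one full simulator run */
    static void run_case(int i, int rank)
    {
        printf("rank %d handling case %d\n", rank, i);
    }

    int main(int argc, char **argv)
    {
        int rank, nprocs;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* deal the independent cases out round-robin over the ranks */
        for (int i = rank; i < NCASES; i += nprocs)
            run_case(i, rank);

        MPI_Finalize();
        return 0;
    }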