Hi Praveen --

In addition to Jeff's good advice on timing the computation you care 
about, I wanted to point out a difference between the MPI and the Chapel 
code:

As you know, MPI is designed to be a distributed memory execution model, 
so to take advantage of the four cores on your Mac, you use mpirun -np 4.

Chapel supports both shared- and distributed-memory parallelism, so the 
way you're running on this 4-core Mac is reasonable, yet different from 
the MPI run.  Specifically, Chapel will create a single process that uses 
multiple threads (typically 4 on this machine) to implement your forall 
loops.  So there will be no inter-process communication in the Chapel 
implementation as there is in the MPI version, and comparing against an 
OpenMP implementation would be a fairer comparison.
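
If you want to confirm or control that degree of parallelism, here's a 
small illustrative sketch (not from your code) using Chapel's standard 
queries:

```chapel
// How many parallel tasks a data-parallel loop will use by default
// on this locale (typically the number of cores, i.e., 4 on your Mac):
writeln("here.maxTaskPar = ", here.maxTaskPar);

// A forall loop over an array's domain is divided among that many
// tasks within a single process -- no inter-process communication:
var A: [1..1_000_000] real;
forall i in A.domain do
  A[i] = 2.0 * i;
```

You can also override the default number of tasks at execution time with 
the built-in --dataParTasksPerLocale=<n> flag.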

Related: the use of the 'StencilDist' domain map has no positive impact 
for a shared-memory execution like this, and will likely add overhead.  It 
is designed for distributed-memory executions of stencil-based 
computations, where it enables caching of values owned by neighboring 
processes.  But when you've only got one process, as here, there's no 
remote data to cache.  So for a shared-memory execution like this, it'd be 
interesting to see how much faster the code would be if the 'dmapped 
StencilDist' clause were commented out (in practice, we often write codes 
that can be compiled with or without distributed data using a 'param' 
conditional -- for example, see the declarations of 'Elems' and 'Nodes' in 
examples/benchmarks/lulesh.chpl).
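
In case it's useful, here's a rough sketch of that 'param' conditional 
pattern -- the names and fluff width below are illustrative, not from 
your code; lulesh.chpl is the authoritative example:

```chapel
use StencilDist;

// Compile-time switch: build with 'chpl -suseStencilDist=true ...'
// to get the distributed version; the default is purely local.
config param useStencilDist = false;

config const n = 100;
const Space = {1..n, 1..n};

// Because 'useStencilDist' is a param, the branch not taken is folded
// away at compile time, so the local version pays no distribution
// overhead at all.
const D = if useStencilDist
            then Space dmapped Stencil(Space, fluff=(1,1))
            else Space;

var u: [D] real;  // local array, or stencil-distributed with fluff
```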

Running on a distributed-memory system with the 'StencilDist' 
distribution against MPI (or better, against an MPI + OpenMP code) would 
also be more of an apples-to-apples comparison, though I suspect you'll 
see Chapel fall further behind in performance at that point...
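
And to echo Jeff's point about timing: here's a hedged sketch of timing 
just the solver loop with the standard 'Time' module (the loop body and 
dt below are placeholders for your actual code):

```chapel
use Time;

config const Tf = 10.0;  // final time, as in your --Tf flag

var watch: Timer;
watch.start();

var t = 0.0;
while t < Tf {
  // ... one finite volume time step of your scheme goes here ...
  const dt = 0.00283;    // placeholder for the computed time step
  t += dt;
}

watch.stop();
writeln("solver loop: ", watch.elapsed(), " seconds");
```

This excludes the startup and teardown costs that 'time ./convect2d' 
includes.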

-Brad


On Mon, 10 Oct 2016, Jeff Hammond wrote:

> Don't take HPC-oriented performance timings with "time ./program", as that
> measures all sorts of things that have nothing to do with application
> performance.  You should time the "while t < Tf" loops in both cases.
>
> Jeff
>
> On Mon, Oct 10, 2016 at 6:30 AM, Praveen C <[email protected]> wrote:
>>
>> Dear all
>>
>> I am trying out Chapel with a simple example to see how it fares with
>> MPI.
>>
>> I solve a linear advection equation in 2-D using Chapel and PETSc with a
>> finite volume method.  The PETSc code makes use of PETSc vectors to
>> handle the MPI communication.  I don't really solve any matrix problem.
>>
>> I have these two codes here
>>
>> https://github.com/cpraveen/cfdlab/tree/master/chapel/convect2d
>> https://github.com/cpraveen/cfdlab/tree/master/petsc/convect2d
>>
>> Both codes are solving the same problem with the same algorithm.  I run
>> these codes on my 4-core MacBook Pro laptop as follows.
>>
>> Chapel 1.4
>>
>> time ./convect2d --n=100 --Tf=10.0 --cfl=0.4 --si=100000
>>
>>
>> 3532  9.99
>>
>> 3533  9.99283
>>
>> 3534  9.99566
>>
>> 3535  9.99849
>>
>> 3536  10.0
>>
>>
>> real 0m4.451s
>>
>> user 0m15.767s
>>
>> sys 0m0.599s
>>
>>
>> PETSc 2.7.3 (MPI)
>>
>> time mpirun -np 4 ./convect -Tf 10.0 -da_grid_x 100 -da_grid_y 100 -cfl
>> 0.4 -si 100000
>>
>>
>> it, t = 3532, 9.990005
>>
>> it, t = 3533, 9.992833
>>
>> it, t = 3534, 9.995661
>>
>> it, t = 3535, 9.998490
>>
>> it, t = 3536, 10.000000
>>
>>
>> real 0m1.677s
>>
>> user 0m6.370s
>>
>> sys 0m0.242s
>>
>>
>> The PETSc (MPI) code is about 2.5 times faster than Chapel.
>>
>> I would like to know if I am making the comparison in a fair manner.  Am
>> I using the best optimization flags?  These are present in the makefiles.
>>
>> With MPI, I used 4 processes, but I did not specify anything with Chapel.
>> Is this a fair way to compare them?  If not, what should be done?
>>
>> Thanks
>> praveen
>>
>>
> ------------------------------------------------------------------------------
>> Check out the vibrant tech community on one of the world's most
>> engaging tech sites, SlashDot.org! http://sdm.link/slashdot
>> _______________________________________________
>> Chapel-users mailing list
>> [email protected]
>> https://lists.sourceforge.net/lists/listinfo/chapel-users
>>
>
>
>
> --
> Jeff Hammond
> [email protected]
> http://jeffhammond.github.io/
>
