Re: [RFC] gfortran's coarray (library version): configure/build and the testsuite

2011-04-05 Thread Jorge D'ELIA
Hello Tobias,

Here are a few comments from my colleague Lisandro Dalcin,
an external developer of PETSc, e.g. see

http://www.mcs.anl.gov/petsc/petsc-as/miscellaneous/index.html

Regards,
Jorge.


- Original message -
> From: "Tobias Burnus" 
> To: "gfortran" , "GCC Mailing List" , 
> "Ralf Wildenhues"
> , "Rainer Orth" 
> Sent: Tuesday, April 5, 2011 17:01:03
> Subject: [RFC] gfortran's coarray (library version): configure/build and the 
> testsuite
>
> 
> Fortran 2008 has built-in parallelization (Coarray [Fortran], CAF)
> [1]. gfortran has taken the first steps toward a communication-library
> version [2]. The library will be based on MPI.
> 
> There are two issues I would like to discuss in this email:
> 
> a) configuring and building
> b) Test-suite support
> 
> Let's start with (b), which is more important to me. 
>
> The current scheme is that the user somehow compiles 
> the communication library (libcaf) [2] and then builds 
> and links doing something like:
>
> mpif90 -fcoarray=lib fortran.f90 -lcaf_mpi
>
> or alternatively
>
> gfortran -fcoarray=lib fortran.f90 -lcaf_mpi
> -I/usr/lib64/mpi/gcc/openmpi/include
> -L/usr/lib64/mpi/gcc/openmpi/lib64
> -lmpi
>
> where some -I, -L, and -l options are added. 
> (Cf. "mpif90 -show" of some MPI implementations.) 
>
> The resulting program is then run using, e.g.,
>
> mpiexec -n 3 ./a.out
>
> Alternatively, it could be just "-lcaf_single" 
> which is run like normal ("./a.out").
> 
> Thus, one needs some means to add link and compile 
> options - and a means to add an (optional) run 
> command. 
>
> These would probably be passed via environment variables.
> 
> One would then either run the tests only if the 
> environment variable is set - or, if "libcaf_single.a" is 
> installed by default (cf. below), one could default to 
> linking that version when no environment variable is
> set.
>
> "make check-gfortran" could then always run the 
> CAF library checks - otherwise, only conditionally.
> 
> What do you think? 
>
> Do you have comments on how this should be done? 
>
> I also wouldn't mind if someone could help, as I am not 
> really comfortable with the test-suite setup, nor do I 
> know Lisp well.
> 
> Regarding (a): As mentioned above, one could consider 
> compiling, linking, and installing "libcaf_single" by 
> default. libgfortran/caf/single.c is a simple stub 
> library which essentially does nothing; its only purpose
> is to allow (re)linking programs compiled with -fcoarray=lib 
> without recompiling them, and to serve testing and debugging 
> purposes (e.g. for the testsuite?). 
>
> If one wants to seriously use a serial program: 
> -fcoarray=single produces much faster code.


COMMENT: Installing libcaf_single is definitely a good idea.
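
To make the stub idea concrete, here is a minimal C sketch of what
such a single-image library might look like. The function names
follow the _gfortran_caf_* convention mentioned in this thread, but
the exact prototypes in libgfortran/caf/single.c may differ, so
treat the signatures below as assumptions.

  /* Hypothetical single-image stubs; prototypes are assumed,
     not copied from libgfortran/caf/single.c.  */

  void
  _gfortran_caf_init (int *argc, char ***argv)
  {
    /* A single image needs no communication setup.  */
    (void) argc;
    (void) argv;
  }

  void
  _gfortran_caf_finalize (void)
  {
    /* Nothing to tear down.  */
  }

  int
  _gfortran_caf_num_images (void)
  {
    return 1;  /* Exactly one image exists.  */
  }

  int
  _gfortran_caf_this_image (void)
  {
    return 1;  /* Fortran image indices are 1-based.  */
  }

Stubs of this kind are what allow a program compiled with
-fcoarray=lib to link and run serially without any MPI installation.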


> Additionally, there is libgfortran/caf/mpi.c, which is 
> an MPI implementation at a very early stage. 


COMMENT: BTW, I think you need to review your 
implementation of _gfortran_caf_finalize(): 
you should check with MPI_Finalized(), and 
call MPI_Finalize() ONLY if _gfortran_caf_init() 
actually called MPI_Init(). These are just good-
citizen rules for libraries that use MPI.
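
A minimal sketch of that good-citizen pattern, assuming simplified
_gfortran_caf_* prototypes; MPI_Initialized(), MPI_Finalized(),
MPI_Init(), and MPI_Finalize() are standard MPI calls:

  #include <mpi.h>

  /* Track whether libcaf itself called MPI_Init, so that
     finalization only tears down what initialization set up.  */
  static int caf_owns_mpi = 0;

  void
  _gfortran_caf_init (int *argc, char ***argv)
  {
    int initialized;
    MPI_Initialized (&initialized);
    if (!initialized)
      {
        MPI_Init (argc, argv);
        caf_owns_mpi = 1;
      }
  }

  void
  _gfortran_caf_finalize (void)
  {
    int finalized;
    MPI_Finalized (&finalized);
    /* Finalize only if we initialized MPI and nobody has
       finalized it already.  */
    if (caf_owns_mpi && !finalized)
      MPI_Finalize ();
  }

This keeps libcaf_mpi from interfering with applications, such as
PETSc-based codes, that manage MPI initialization themselves.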


> (Currently, MPI v2 is required; however, the plan is to 
> move to an MPI v1 implementation - perhaps optionally 
> using MPI v2 features as well.)
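
As an aside, mpi.h defines the standard MPI_VERSION macro, and MPI
provides MPI_Get_version(), so the library could detect both at
compile time and at run time which standard is available. The small
test program below is only an illustration of those two checks, not
part of any patch:

  #include <mpi.h>
  #include <stdio.h>

  int
  main (int argc, char **argv)
  {
    int major, minor;

    MPI_Init (&argc, &argv);
    /* Run-time query of the standard the implementation supports.  */
    MPI_Get_version (&major, &minor);
    printf ("This MPI implements standard %d.%d\n", major, minor);
  #if MPI_VERSION >= 2
    /* Compiled against MPI-2 headers: one-sided operations such as
       MPI_Put/MPI_Get, a natural fit for coarray semantics, are
       declared and could be used.  */
  #endif
    MPI_Finalize ();
    return 0;
  }
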
> 
> Thus, the first question is: 
>
> Should one build and install single.c (libcaf_single.a) 
> by default? 


COMMENT: I think so.


> (This might also relate to (b), namely how the test suite 
> is handled.)
> 
> And the second question is: 
>
> Should one be able to configure and build mpi.c 
> (libcaf_mpi.a) by some means? 
>
> I think users interested in it could also follow the procedure 
> of [2] - or let their admin do it. 
>
> (For Linux distributions one would run into the problem 
> that they typically offer several MPI implementations, 
> e.g. Open MPI and MPICH2, which couldn't be handled 
> that way.)


COMMENT: But they have mechanisms in place that let sysadmins 
(by using alternatives) or regular users (by using environment 
modules) switch MPI implementations. 
Linux distros would ship two versions of libcaf_mpi.a, 
and the alternatives/modules machinery would let you make a choice.


> (In any case, only static libraries should be created; 
> the libraries could then be installed 
> in $PREFIX/$lib/gcc/$target/$version/, where
> already libgcc.a etc. are located.)


COMMENT: Well, using shared libraries would certainly help 
users switch the underlying MPI implementation at runtime. 
This is a feature that should be considered.


> [1] http://gcc.gnu.org/wiki/Coarray
> [2] http://gcc.gnu.org/wiki/CoarrayLib
> 
> 
> PS: At some point there will be also a shared-memory version 
> - maybe for GCC 4.8.
> 
> 
> PPS: Tiny example program - to be compiled 
> with -fcoarray=single -- or
> with -fcoarray=lib as described in [2]:
> 
> program coarray_example
> print *, 'This is image ', this_image(), ' of ', num_images()
> end program coarray_example


-- 
Lisandro Dalcin
---
CIMEC (INTEC/CONICET-UNL)
Predio CONICET-Santa Fe
Colec

Re: [RFC] gfortran's coarray (library version): configure/build and the testsuite

2011-04-11 Thread Jorge D'ELIA
- Original message -
> From: "Ralf Wildenhues" 
> To: "Jorge D'ELIA" 
> CC: "Tobias Burnus" , 
> "gfortran" , 
> "GCC Mailing List" , 
> "Rainer Orth" CeBiTec.Uni-Bielefeld.DE>
> Sent: Saturday, April 9, 2011 6:34:05
> Subject: Re: [RFC] gfortran's coarray (library version): 
> configure/build and the testsuite
>
> Hello,
> 
> * Jorge D'ELIA wrote on Wed, Apr 06, 2011 at 01:24:58AM CEST:
> > Here are a few comments from my colleague Lisandro Dalcin,
> > an external developer of PETSc, e.g. see
> 
> > - Mensaje original -
> > > The current scheme is that the user somehow compiles
> > > the communication library (libcaf) [2] and then builds
> > > and links doing something like:
> > >
> > > mpif90 -fcoarray=lib fortran.f90 -lcaf_mpi
> > >
> > > or alternatively
> > >
> > > gfortran -fcoarray=lib fortran.f90 -lcaf_mpi
> > > -I/usr/lib64/mpi/gcc/openmpi/include
> > > -L/usr/lib64/mpi/gcc/openmpi/lib64
> > > -lmpi
> > >
> > > where some -I, -L, and -l options are added.
> > > (Cf. "mpif90 -show" of some MPI implementations.)
> > >
> > > The resulting program is then run using, e.g.,
> > >
> > > mpiexec -n 3 ./a.out
> > >
> > > Alternatively, it could be just "-lcaf_single"
> > > which is run like normal ("./a.out").
> 
> I think one of the most important things is to allow overriding
> both the mpif90 and the mpiexec commands, so as to support
> batch environments (qsub, llrun). Parsing of output might
> need to be more complex in that case, too. But even if only to
> allow for different mpiexec.* incarnations this would be good.
> 
> > > (In any case, only static libraries should be created;
> > > the libraries could then be installed
> > > in $PREFIX/$lib/gcc/$target/$version/, where
> > > already libgcc.a etc. are located.)
> >
> > COMMENT: Well, using shared libraries would certainly help
> > users switch the underlying MPI implementation at runtime.
> > This is a feature that should be considered.
> 
> The MPI implementations I know all have pairwise incompatible ABIs,
> prohibiting any kind of switching at run time. If anything, GCC
> might consider making it easy to build and install in parallel the
> library for multiple MPI implementations.
> 
> Cheers,
> Ralf


Hi Ralf,


It is clear that the present discussion arises because 
binaries built against one MPI implementation are not 
yet compatible with another.

To keep the compiler as neutral as possible 
with respect to end users, we should see how to 
overcome that problem.

Among other possibilities, we considered five options, from 
the simplest to the more elaborate:

1) A static library "libcaf_mpi.a", as proposed previously 
in another thread. It uses a specific MPI implementation, 
chosen when the library is built (e.g. openmpi or mpich). 
This possibility is the simplest one. However, it only works 
with the MPI implementation available when the library was built.

2) A dynamic library "libcaf_mpi.so" that uses a specific MPI
implementation (e.g. openmpi or mpich) chosen when the library 
is built. Again, it only works with the MPI implementation 
available when the library was built. From a usability point 
of view, this option is not very different from the previous 
one.

3) A symbolic link "libcaf_mpi.so" that points to "libcaf_mpich.so" 
or "libcaf_openmpi.so". The sysadmin can manage the link using 
tools like "alternatives". Linux distributions usually take care
of this infrastructure themselves, so no extra work is required
on the gfortran side. However, regular (non-root) users cannot 
switch the backend MPI, and this is only available on POSIX systems.

4) Different dynamic libraries, all named "libcaf_mpi.so", are 
built, one for each MPI implementation, but they are installed
in different directories, e.g.

/mpich/libcaf_mpi.so or 
/openmpi/libcaf_mpi.so.

By using the "modules" tool, users can select their preferred
MPI implementation (e.g. "module load mpich2-x86_64" in 
Fedora). This works by adding entries to the LD_LIBRARY_PATH 
environment variable. Linux distributions usually take care 
of this infrastructure themselves, so no extra work is required
on the gfortran side. Regular users can choose their
preferred MPI implementation, and the dynamic linker loads
the appropriate "libcaf_mpi.so".

5) A dynamic library "libcaf_mpi.so" built using 
dlopen() (see the sketch below). This option could be 
practical if the number of MPI functions used is not 
too large (B
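
A rough C illustration of the dlopen() approach: resolve the needed
MPI entry points at run time instead of linking against a fixed
implementation. The library path below is hard-coded purely for
illustration (link with -ldl on glibc systems), and the sketch
glosses over a real obstacle that Ralf pointed out: opaque MPI types
and constants also differ between implementations, not just the
symbol addresses.

  #include <dlfcn.h>
  #include <stdio.h>

  typedef int (*mpi_init_fn) (int *, char ***);
  typedef int (*mpi_finalize_fn) (void);

  int
  main (int argc, char **argv)
  {
    /* The path is hard-coded purely for illustration; a real
       library would take it from configuration or the
       environment.  */
    void *handle = dlopen ("libmpi.so", RTLD_NOW | RTLD_GLOBAL);
    if (!handle)
      {
        fprintf (stderr, "dlopen failed: %s\n", dlerror ());
        return 1;
      }

    mpi_init_fn mpi_init = (mpi_init_fn) dlsym (handle, "MPI_Init");
    mpi_finalize_fn mpi_finalize
      = (mpi_finalize_fn) dlsym (handle, "MPI_Finalize");
    if (!mpi_init || !mpi_finalize)
      {
        fprintf (stderr, "missing MPI symbol: %s\n", dlerror ());
        return 1;
      }

    mpi_init (&argc, &argv);
    /* ... every MPI function used by libcaf would need a lookup
       like the ones above, which is why this option only pays off
       if the set of functions stays small ...  */
    mpi_finalize ();
    dlclose (handle);
    return 0;
  }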