[R-pkg-devel] Bug? Makevars ignores FC=mpif90

2015-08-10 Thread Ignacio Martinez
Hi,

I'm learning to develop R packages that use Fortran (and MPI). I recently
asked a question on stack-overflow and I was told to contact this list with
a followup question.

When I add the following line to src/Makevars:

FC=mpif90

RStudio ignores it when I click `Build and Reload`. But if I create a
Makevars file in a .R/ directory in my $HOME directory with that same line,
RStudio uses mpif90.

Is the fact that R is ignoring the src/Makevars a bug? If not, am I doing
something wrong?

Thanks,

Ignacio


PS: I hope this is the right place to ask this question.



Re: [R-pkg-devel] Bug? Makevars ignores FC=mpif90

2015-08-10 Thread Ignacio Martinez
Thanks, Peter.

1. "Does it say anywhere that the build process should use src/Makevars?"
I'm not sure; where should it say that? I'm using the default options in
RStudio to create a package.

2. "Also notice that an important part of the package concept is that
packages can be distributed."
The package that I want to build uses MPI. Is that a problem? My
understanding is that in order to use MPI I have to set FC to mpif90. Is
that wrong?

3. "Build configuration belongs in the build environment."
I'm not sure I follow. What is the correct way of telling RStudio that
the package I'm building uses MPI?

Thanks again!

PS: This is the link to my question on Stack Overflow:
<http://stackoverflow.com/questions/31924487/tell-rstudio-to-use-mpif90-instead-of-gfortran>



On Mon, Aug 10, 2015 at 1:41 PM Peter Dalgaard wrote:

>
> > On 10 Aug 2015, at 19:25, Ignacio Martinez wrote:
> >
> > [original question snipped; quoted in full above]
> >
>
> Ask differently: Does it say anywhere that the build process should use
> src/Makevars?
>
> Also notice that an important part of the package concept is that packages
> can be distributed. The recipient's Fortran compiler is not necessarily
> "mpif90". So, generally speaking, build configuration belongs in the build
> environment, not in the package.
>
> -pd
>
> --
> Peter Dalgaard, Professor,
> Center for Statistics, Copenhagen Business School
> Solbjerg Plads 3, 2000 Frederiksberg, Denmark
> Phone: (+45)38153501
> Email: pd@cbs.dk  Priv: pda...@gmail.com



[R-pkg-devel] not resolved from current namespace error

2015-08-11 Thread Ignacio Martinez
I'm trying to create a package that uses an MPI Fortran module. I have a
working version <https://github.com/ignacio82/MyPi> of that package that
uses a Fortran module without MPI.

When I run the function `FMPIpi(DARTS = 5000, ROUNDS = 100, cores=2)` I get
the following error:

> FMPIpi(DARTS = 5000, ROUNDS = 100, cores=2)
2 slaves are spawned successfully. 0 failed.
master (rank 0, comm 1) of size 3 is running on: 2d60fd60575b
slave1 (rank 1, comm 1) of size 3 is running on: 2d60fd60575b
slave2 (rank 2, comm 1) of size 3 is running on: 2d60fd60575b
Error in .Fortran("MPIpi", avepi = as.numeric(1), DARTS =
as.integer(DARTS),  :
 "mpipi" not resolved from current namespace (MyPi)

It looks like something is wrong in my NAMESPACE. This is what I have there:

export(Pibenchmark)
export(Fpi)
export(FMPIpi)
export(Rpi)
useDynLib(MyPi)
exportPattern("^[[:alpha:]]+")

This is my Fortran module:

Module Fortranpi
USE MPI
IMPLICIT NONE
contains
subroutine dboard(darts, dartsscore)
  integer, intent(in)           :: darts
  double precision, intent(out) :: dartsscore
  double precision              :: x_coord, y_coord
  integer                       :: score, n

score = 0
do n = 1, darts
 call random_number(x_coord)
 call random_number(y_coord)

 if ((x_coord**2 + y_coord**2) <= 1.0d0) then
 score = score + 1
 end if
end do

dartsscore = 4.0d0*score/darts

end subroutine dboard

subroutine pi(avepi, DARTS, ROUNDS) bind(C, name="pi_")
  use, intrinsic :: iso_c_binding, only : c_double, c_int
  real(c_double), intent(out) :: avepi
  integer(c_int), intent(in)  :: DARTS, ROUNDS
  integer                     :: MASTER, rank, i, n
  integer, allocatable        :: seed(:)
  double precision            :: pi_est, homepi, pirecv, pisum

! we set it to zero in the sequential run
rank = 0
! initialize the random number generator
! we make sure the seed is different for each task
call random_seed()
call random_seed(size = n)
allocate(seed(n))
seed = 12 + rank*11
call random_seed(put=seed(1:n))
deallocate(seed)

avepi = 0
do i = 0, ROUNDS-1
 call dboard(darts, pi_est)
 ! calculate the average value of pi over all iterations
 avepi = ((avepi*i) + pi_est)/(i + 1)
end do
end subroutine pi


subroutine MPIpi(avepi, DARTS, ROUNDS) bind(C, name="MPIpi_")
  use, intrinsic :: iso_c_binding, only : c_double, c_int
  real(c_double), intent(out) :: avepi
  integer(c_int), intent(in)  :: DARTS, ROUNDS
  integer                     :: i, n, mynpts, ierr, numprocs, proc_num
  integer, allocatable        :: seed(:)
  double precision            :: pi_est, y, sumpi

 call mpi_init(ierr)
 call mpi_comm_size(MPI_COMM_WORLD, numprocs, ierr)
 call mpi_comm_rank(MPI_COMM_WORLD, proc_num, ierr)

 if (numprocs .eq. 0) then
mynpts = ROUNDS - (numprocs-1)*(ROUNDS/numprocs)
 else
mynpts = ROUNDS/numprocs
 endif

 ! initialize the random number generator
 ! we make sure the seed is different for each task
 call random_seed()
 call random_seed(size = n)
 allocate(seed(n))
 seed = 12 + proc_num*11
 call random_seed(put=seed(1:n))
 deallocate(seed)

 y = 0.0d0
 do i = 1, mynpts
    call dboard(darts, pi_est)
    y = y + pi_est
 end do

 call mpi_reduce(y, sumpi, 1, mpi_double_precision, mpi_sum, 0, &
 mpi_comm_world, ierr)
 if (proc_num==0) avepi = sumpi/ROUNDS
 call mpi_finalize(ierr)
end subroutine MPIpi

end module Fortranpi

And this is my R function:

#' @export
FMPIpi <- function(DARTS, ROUNDS, cores) {
  Rmpi::mpi.spawn.Rslaves(nslaves = cores)
  retvals <- .Fortran("MPIpi", avepi = as.numeric(1),
                      DARTS = as.integer(DARTS), ROUNDS = as.integer(ROUNDS))
  return(retvals$avepi)
}

What am I doing wrong?

Thanks a lot!

Ignacio



Re: [R-pkg-devel] not resolved from current namespace error

2015-08-11 Thread Ignacio Martinez
Renaming did the trick! Now I can build the package, but the parallel part
is not working (things are much slower in parallel). I will try to figure
that out next.
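
In case it helps others following the thread, here is a rough sketch of what
the R side looks like after the kind of rename Tom suggested. I'm assuming the
Fortran subroutine and its bind(C) label were both changed to an all-lowercase
name (mpipi / mpipi_ here); the exact names in the package may differ:

#' @export
FMPIpi <- function(DARTS, ROUNDS, cores) {
  Rmpi::mpi.spawn.Rslaves(nslaves = cores)
  # The all-lowercase name matches the symbol .Fortran looks up, assuming the
  # Fortran side now declares: subroutine mpipi(...) bind(C, name="mpipi_")
  retvals <- .Fortran("mpipi", avepi = as.numeric(1),
                      DARTS = as.integer(DARTS), ROUNDS = as.integer(ROUNDS))
  return(retvals$avepi)
}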

Is this a good list to ask R + Fortran + MPI questions or should I go to a
different place?

Thanks!

On Tue, Aug 11, 2015 at 2:21 PM Tom Wainwright wrote:

> Not sure, but your problem might be answered in the .Fortran() help page:
>
> All Fortran compilers known to be usable to compile R map symbol names to
>> lower case, and so does .Fortran.
>>
>
> I've been caught by that before, and found that using all lowercase names
> for Fortran routines in R is safest.
>
> Tom Wainwright
>
> 
> The contents of this message are mine personally and do not
> necessarily reflect any position of the Government or the
> National Oceanic and Atmospheric Administration.
> ~~~~~~~~
>
>
> On Tue, Aug 11, 2015 at 10:32 AM, Ignacio Martinez wrote:
>
>> [original message snipped; quoted in full above]