Great, thank you very much for the help. I was absolutely not looking in
that direction. Working great again now :)
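For the archives, a minimal sketch of the fix (assuming a bash-style login
shell; the `--mca io romio321` flag and the `OMPI_MCA_io` variable are the
ones from John's message below):

```shell
# One-off: force the romio321 IO component for a single run
#   mpirun -np 1 -H localhost:1 --mca io romio321 ./mpitest

# Permanent: set the MCA parameter in the environment (e.g. from a
# login script) so every mpirun selects romio321 instead of ompio
export OMPI_MCA_io=romio321
```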

> I looked into the IO issue; one hint turns up with
>
> export OMPI_MCA_io_base_verbose=40
> mpirun -np 1 -H localhost:1 ./mpitest
> ...
>  mca: base: components_open: found loaded component ompio
>  mca: base: components_open: component ompio open function successful
>  mca: base: components_open: found loaded component romio321
>  mca: base: components_open: component romio321 open function successful
> ...
>
> So in fact there are two IO components available: ompio and romio321. The
> first is selected (and fails).  If you select the second, mpitest works:
>
> $ mpirun -np 1 -H localhost:1 --mca io romio321 ./mpitest
> This is process 1 / 1
>
> You can make this permanent with
>
> export OMPI_MCA_io=romio321
>
> added to your login scripts.
>
> HTH.  (OpenMPI is too complicated for its own good.)
>
>
> John
>
>
>
> On 2021-02-03 09:51, [email protected] wrote:
>>
>> Hi Martin,
>>
>> I haven't run into your MPI_File_open issue (don't use it), but
>> your code does fail for me too in the same way.
>>
>>> $> mpirun -np 1 -H localhost:1 ./fmpitest
>>> fmpitest:/usr/local/lib/libmpi.so.5.0: ./fmpitest : WARNING:
>>>     symbol(mpi_fortran_statuses_ignore_) size mismatch,
>>>     relink your program
>>> fmpitest:/usr/local/lib/libmpi.so.5.0: ./fmpitest : WARNING:
>>>     symbol(mpi_fortran_status_ignore_) size mismatch,
>>>     relink your program
>>
>> The Fortran symbol warning you see is common and I'm not sure of the
>> cause.  I did look into it at one point and decided all definitions were
>> in fact identical, so it might be a weird compiler+linker issue.
>>
>> It's never been symptomatic beyond the warning, so I ignore it.
>>
>>
>> --John
>