For the transport, try:

$ mpirun.openmpi -n 2 --mca btl self,tcp ./printf
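(A quick way to check which btl components a given Open MPI build actually provides is ompi_info; assuming a stock install, something like

$ ompi_info | grep "MCA btl"

should list self, tcp and whatever else was compiled in.)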

yay! this worked. My bare-bones test code runs flawlessly with that:

gmulas@capitanata:~/PAHmodels/anharmonica-scalapack$ mpiexec.openmpi --mca btl self,tcp sample_printf
MPI_Init call ok
My rank is = 0
number of procs is = 2

MPI_Init call ok
My rank is = 1
number of procs is = 2

MPI_Finalize call ok, returned 0
MPI_Finalize call ok, returned 0

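For reference, a minimal sketch of what a test program along these lines might look like (this is only a reconstruction from the messages printed above, not the actual sample_printf source):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, nprocs, ret;

    /* Initialise MPI and report success, as in the output above */
    MPI_Init(&argc, &argv);
    printf("MPI_Init call ok\n");

    /* Query rank and communicator size */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    printf("My rank is = %d\n", rank);
    printf("number of procs is = %d\n\n", nprocs);

    /* Shut MPI down and report the return code */
    ret = MPI_Finalize();
    printf("MPI_Finalize call ok, returned %d\n", ret);
    return 0;
}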

The same code, run without the --mca option, yields:

gmulas@capitanata:~/PAHmodels/anharmonica-scalapack$ mpiexec.openmpi -n 2 sample_printf
--------------------------------------------------------------------------
[[23445,1],1]: A high-performance Open MPI point-to-point messaging module
was unable to find any relevant network interfaces:

Module: OpenFabrics (openib)
  Host: capitanata

Another transport will be used instead, although this may result in
lower performance.

NOTE: You can disable this warning by setting the MCA parameter
btl_base_warn_component_unused to 0.
--------------------------------------------------------------------------

Of course it's a workaround, not a real solution, but way better than
nothing :)
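If you want the workaround to stick without typing --mca every time, Open MPI also reads MCA parameters from a per-user config file and from the environment; roughly (assuming a standard installation):

$ cat ~/.openmpi/mca-params.conf
btl = self,tcp
btl_base_warn_component_unused = 0

or, for a single shell session:

$ export OMPI_MCA_btl=self,tcp
$ export OMPI_MCA_btl_base_warn_component_unused=0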

Hi,

Ok, time to dig deeper into btl issues. Can you send me the output of:

$ mpirun -n 2 -mca mpi_show_mca_params all ./sample_printf
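The dump from mpi_show_mca_params is fairly long; if it is easier, something like

$ mpirun -n 2 -mca mpi_show_mca_params all ./sample_printf 2>&1 | grep btl

to keep just the btl-related parameters would also do for a first look.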

thanks
Alastair


thanks!

Giacomo



--
Alastair McKinstry, <alast...@sceal.ie>, <mckins...@debian.org>, 
https://diaspora.sceal.ie/u/amckinstry
Commander Vimes didn’t like the phrase “The innocent have nothing to fear,”
believing the innocent had everything to fear, mostly from the guilty but in
the longer term even more from those who say things like “The innocent have
nothing to fear.”
 - T. Pratchett, Snuff
