Your message dated Sun, 27 Dec 2020 21:48:47 +0100
with message-id <20201227204847.ga24...@xanadu.blop.info>
and subject line bug fixed in openmpi 4.1.0-2
has caused the Debian Bug report #978205,
regarding dune-grid: FTBFS: tests failed
to be marked as done.

This means that you claim that the problem has been dealt with.
If this is not the case, it is now your responsibility to reopen the
Bug report if necessary, and/or to fix the problem forthwith.

(NB: If you are a system administrator and have no idea what this
message is talking about, this may indicate a serious mail system
misconfiguration somewhere. Please contact ow...@bugs.debian.org
immediately.)


-- 
978205: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=978205
Debian Bug Tracking System
Contact ow...@bugs.debian.org with problems
--- Begin Message ---
Source: dune-grid
Version: 2.7.0-2
Severity: serious
Justification: FTBFS on amd64
Tags: bullseye sid ftbfs
Usertags: ftbfs-20201226 ftbfs-bullseye

Hi,

During a rebuild of all packages in sid, your package failed to build
on amd64.
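
All of the failures in the log below come down to the same thing: the test
binaries call MPI_Init without being launched through mpirun (Open MPI
"singleton" mode), and the Open MPI in sid at the time could not complete
orte_init in that mode ("opal_pmix_base_select failed", "Unable to start a
daemon on the local node"). As a rough illustration only -- this program is
not part of the dune-grid sources, and the file name is made up -- a
standalone reproducer compiled with mpicc and run directly, the same way
dune-ctest runs the tests, would be expected to hit the same abort on an
affected system:

  /* mpi_singleton_check.c -- hypothetical standalone reproducer,
   * not taken from dune-grid.
   * Build:  mpicc -o mpi_singleton_check mpi_singleton_check.c
   * Run:    ./mpi_singleton_check     (directly, without mpirun)
   */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      /* On an affected Open MPI build this is where singleton init
       * aborts, matching the ORTE_ERROR_LOG lines in the log below. */
      MPI_Init(&argc, &argv);

      int rank = 0, size = 0;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      printf("MPI_Init succeeded: rank %d of %d\n", rank, size);

      MPI_Finalize();
      return 0;
  }

If such a program initializes cleanly after upgrading to openmpi 4.1.0-2
(the version named in the closing message above), the dune-grid test
failures should disappear with it.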

Relevant part (hopefully):
> make[5]: Entering directory '/<<PKGBUILDDIR>>/build'
> make[5]: Nothing to be done for 'CMakeFiles/build_tests.dir/build'.
> make[5]: Leaving directory '/<<PKGBUILDDIR>>/build'
> [100%] Built target build_tests
> make[4]: Leaving directory '/<<PKGBUILDDIR>>/build'
> /usr/bin/cmake -E cmake_progress_start /<<PKGBUILDDIR>>/build/CMakeFiles 0
> make[3]: Leaving directory '/<<PKGBUILDDIR>>/build'
> make[2]: Leaving directory '/<<PKGBUILDDIR>>/build'
> cd build; PATH=/<<PKGBUILDDIR>>/debian/tmp-test:$PATH /usr/bin/dune-ctest 
>    Site: ip-172-31-2-151
>    Build name: Linux-c++
> Create new tag: 20201226-1837 - Experimental
> Test project /<<PKGBUILDDIR>>/build
>       Start  1: scsgmappertest
>  1/62 Test  #1: scsgmappertest 
> ........................................***Failed    0.04 sec
> [ip-172-31-2-151:01441] [[2655,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-2-151:01440] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-2-151:01440] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-2-151:01440] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start  2: mcmgmappertest
>  2/62 Test  #2: mcmgmappertest 
> ........................................***Failed    0.02 sec
> [ip-172-31-2-151:01443] [[2653,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-2-151:01442] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-2-151:01442] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-2-151:01442] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start  3: conformvolumevtktest
>  3/62 Test  #3: conformvolumevtktest 
> ..................................***Failed    0.02 sec
> [ip-172-31-2-151:01445] [[2651,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-2-151:01444] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-2-151:01444] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-2-151:01444] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start  4: gnuplottest
>  4/62 Test  #4: gnuplottest 
> ...........................................***Failed    0.02 sec
> [ip-172-31-2-151:01447] [[2649,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-2-151:01446] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-2-151:01446] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-2-151:01446] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start  5: nonconformboundaryvtktest
>  5/62 Test  #5: nonconformboundaryvtktest 
> .............................***Failed    0.02 sec
> [ip-172-31-2-151:01449] [[2647,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-2-151:01448] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-2-151:01448] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-2-151:01448] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start  6: printgridtest
>  6/62 Test  #6: printgridtest 
> .........................................***Skipped   0.00 sec
>       Start  7: subsamplingvtktest
>  7/62 Test  #7: subsamplingvtktest 
> ....................................***Failed    0.02 sec
> [ip-172-31-2-151:01453] [[2643,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-2-151:01452] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-2-151:01452] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-2-151:01452] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start  8: vtktest
>  8/62 Test  #8: vtktest 
> ...............................................***Failed    0.02 sec
> [ip-172-31-2-151:01455] [[2641,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-2-151:01454] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-2-151:01454] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-2-151:01454] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start  9: vtktest-mpi-2
>  9/62 Test  #9: vtktest-mpi-2 
> .........................................***Failed    0.01 sec
> [ip-172-31-2-151:01456] [[2638,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> 
>       Start 10: vtksequencetest
> 10/62 Test #10: vtksequencetest 
> .......................................***Failed    0.02 sec
> [ip-172-31-2-151:01458] [[2636,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-2-151:01457] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-2-151:01457] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-2-151:01457] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start 11: amirameshtest
> 11/62 Test #11: amirameshtest 
> .........................................***Skipped   0.00 sec
>       Start 12: starcdreadertest
> 12/62 Test #12: starcdreadertest 
> ......................................***Failed    0.02 sec
> [ip-172-31-2-151:01461] [[2635,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-2-151:01460] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-2-151:01460] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-2-151:01460] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start 13: gmshtest-onedgrid
> 13/62 Test #13: gmshtest-onedgrid 
> .....................................***Failed    0.02 sec
> [ip-172-31-2-151:01463] [[2633,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-2-151:01462] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-2-151:01462] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-2-151:01462] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start 14: gmshtest-uggrid
> 14/62 Test #14: gmshtest-uggrid 
> .......................................***Failed    0.02 sec
> [ip-172-31-2-151:01465] [[2631,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-2-151:01464] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-2-151:01464] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-2-151:01464] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start 15: gmshtest-alberta2d
> 15/62 Test #15: gmshtest-alberta2d ....................................   
> Passed    0.04 sec
>       Start 16: gmshtest-alberta3d
> 16/62 Test #16: gmshtest-alberta3d ....................................   
> Passed    0.06 sec
>       Start 17: test-dgf-yasp
> 17/62 Test #17: test-dgf-yasp 
> .........................................***Failed    0.02 sec
> [ip-172-31-2-151:01469] [[2627,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-2-151:01468] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-2-151:01468] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-2-151:01468] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start 18: test-dgf-yasp-offset
> 18/62 Test #18: test-dgf-yasp-offset 
> ..................................***Failed    0.02 sec
> [ip-172-31-2-151:01471] [[2625,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-2-151:01470] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-2-151:01470] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-2-151:01470] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start 19: test-dgf-oned
> 19/62 Test #19: test-dgf-oned 
> .........................................***Failed    0.02 sec
> [ip-172-31-2-151:01473] [[2623,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-2-151:01472] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-2-151:01472] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-2-151:01472] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start 20: test-dgf-alberta
> 20/62 Test #20: test-dgf-alberta ......................................   
> Passed    0.01 sec
>       Start 21: test-dgf-ug
> 21/62 Test #21: test-dgf-ug 
> ...........................................***Failed    0.02 sec
> [ip-172-31-2-151:01476] [[2618,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-2-151:01475] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-2-151:01475] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-2-151:01475] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start 22: test-dgf-gmsh-ug
> 22/62 Test #22: test-dgf-gmsh-ug 
> ......................................***Failed    0.02 sec
> [ip-172-31-2-151:01478] [[2616,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-2-151:01477] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-2-151:01477] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-2-151:01477] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start 23: geometrygrid-coordfunction-copyconstructor
> 23/62 Test #23: geometrygrid-coordfunction-copyconstructor ............   
> Passed    0.00 sec
>       Start 24: test-geogrid-yaspgrid
> 24/62 Test #24: test-geogrid-yaspgrid 
> .................................***Failed    0.02 sec
> [ip-172-31-2-151:01481] [[2615,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-2-151:01480] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-2-151:01480] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-2-151:01480] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start 25: test-geogrid-uggrid
> 25/62 Test #25: test-geogrid-uggrid 
> ...................................***Failed    0.02 sec
> [... identical Open MPI orte_init / MPI_INIT failure output as above; only the process and job IDs differ ...]
> 
>       Start 26: test-gridinfo
> 26/62 Test #26: test-gridinfo .........................................***Failed    0.02 sec
> [... identical Open MPI orte_init / MPI_INIT failure output as above; only the process and job IDs differ ...]
> 
>       Start 27: test-identitygrid
> 27/62 Test #27: test-identitygrid .....................................***Failed    0.02 sec
> [... identical Open MPI orte_init / MPI_INIT failure output as above; only the process and job IDs differ ...]
> 
>       Start 28: test-oned
> 28/62 Test #28: test-oned .............................................   Passed    0.01 sec
>       Start 29: test-mcmg-geogrid
> 29/62 Test #29: test-mcmg-geogrid .....................................***Failed    0.02 sec
> [... identical Open MPI orte_init / MPI_INIT failure output as above; only the process and job IDs differ ...]
> 
>       Start 30: testiteratorranges
> 30/62 Test #30: testiteratorranges ....................................***Failed    0.02 sec
> [... identical Open MPI orte_init / MPI_INIT failure output as above; only the process and job IDs differ ...]
> 
>       Start 31: test-hierarchicsearch
> 31/62 Test #31: test-hierarchicsearch .................................***Failed    0.02 sec
> [... identical Open MPI orte_init / MPI_INIT failure output as above; only the process and job IDs differ ...]
> 
>       Start 32: test-ug
> 32/62 Test #32: test-ug ...............................................***Failed    0.02 sec
> [... identical Open MPI orte_init / MPI_INIT failure output as above; only the process and job IDs differ ...]
> 
>       Start 33: test-parallel-ug
> 33/62 Test #33: test-parallel-ug ......................................***Failed    0.02 sec
> [... identical Open MPI orte_init / MPI_INIT failure output as above; only the process and job IDs differ ...]
> 
>       Start 34: test-parallel-ug-mpi-2
> 34/62 Test #34: test-parallel-ug-mpi-2 ................................***Failed    0.01 sec
> [... identical ORTE_ERROR_LOG / opal_pmix_base_select failure output as above; only the process and job IDs differ ...]
> 
>       Start 35: test-loadbalancing
> 35/62 Test #35: test-loadbalancing ....................................***Skipped   0.00 sec
>       Start 36: issue-53-uggrid-intersections
> 36/62 Test #36: issue-53-uggrid-intersections .........................***Failed    0.02 sec
> [... identical Open MPI orte_init / MPI_INIT failure output as above; only the process and job IDs differ ...]
> 
>       Start 37: test-alberta
> 37/62 Test #37: test-alberta ..........................................   Passed    0.16 sec
>       Start 38: test-alberta-1-1
> 38/62 Test #38: test-alberta-1-1 ......................................   Passed    0.01 sec
>       Start 39: test-alberta-1-1-no-deprecated
> 39/62 Test #39: test-alberta-1-1-no-deprecated ........................   Passed    0.01 sec
>       Start 40: test-alberta-1-2
> 40/62 Test #40: test-alberta-1-2 ......................................   Passed    0.16 sec
>       Start 41: test-alberta-2-2
> 41/62 Test #41: test-alberta-2-2 ......................................   Passed    0.16 sec
>       Start 42: test-alberta-generic
> 42/62 Test #42: test-alberta-generic ..................................   Passed    0.16 sec
>       Start 43: test-yaspgrid-backuprestore-equidistant
> 43/62 Test #43: test-yaspgrid-backuprestore-equidistant ...............***Failed    0.02 sec
> [... identical Open MPI orte_init / MPI_INIT failure output as above; only the process and job IDs differ ...]
> 
>       Start 44: test-yaspgrid-backuprestore-equidistant-mpi-2
> 44/62 Test #44: test-yaspgrid-backuprestore-equidistant-mpi-2 .........***Failed    0.01 sec
> [... identical ORTE_ERROR_LOG / opal_pmix_base_select failure output as above; only the process and job IDs differ ...]
> 
>       Start 45: test-yaspgrid-backuprestore-equidistantoffset
> 45/62 Test #45: test-yaspgrid-backuprestore-equidistantoffset .........***Failed    0.02 sec
> [... identical Open MPI orte_init / MPI_INIT failure output as above; only the process and job IDs differ ...]
> 
>       Start 46: test-yaspgrid-backuprestore-equidistantoffset-mpi-2
> 46/62 Test #46: test-yaspgrid-backuprestore-equidistantoffset-mpi-2 ...***Failed    0.01 sec
> [... identical ORTE_ERROR_LOG / opal_pmix_base_select failure output as above; only the process and job IDs differ ...]
> 
>       Start 47: test-yaspgrid-backuprestore-tensor
> 47/62 Test #47: test-yaspgrid-backuprestore-tensor ....................***Failed    0.02 sec
> [... identical Open MPI orte_init / MPI_INIT failure output as above; only the process and job IDs differ ...]
> 
>       Start 48: test-yaspgrid-backuprestore-tensor-mpi-2
> 48/62 Test #48: test-yaspgrid-backuprestore-tensor-mpi-2 ..............***Failed    0.01 sec
> [... identical ORTE_ERROR_LOG / opal_pmix_base_select failure output as above; only the process and job IDs differ ...]
> 
>       Start 49: test-yaspgrid-entityshifttable
> 49/62 Test #49: test-yaspgrid-entityshifttable ........................   Passed    0.00 sec
>       Start 50: test-yaspgrid-tensorgridfactory
> 50/62 Test #50: test-yaspgrid-tensorgridfactory .......................***Failed    0.02 sec
> [... identical Open MPI orte_init / MPI_INIT failure output as above; only the process and job IDs differ ...]
> 
>       Start 51: test-yaspgrid-tensorgridfactory-mpi-2
> 51/62 Test #51: test-yaspgrid-tensorgridfactory-mpi-2 .................***Failed    0.01 sec
> [... identical ORTE_ERROR_LOG / opal_pmix_base_select failure output as above; only the process and job IDs differ ...]
> 
>       Start 52: test-yaspgrid-yaspfactory-1d
> 52/62 Test #52: test-yaspgrid-yaspfactory-1d ..........................***Failed    0.02 sec
> [... identical Open MPI orte_init / MPI_INIT failure output as above; only the process and job IDs differ ...]
> 
>       Start 53: test-yaspgrid-yaspfactory-1d-mpi-2
> 53/62 Test #53: test-yaspgrid-yaspfactory-1d-mpi-2 ....................***Failed    0.01 sec
> [... identical ORTE_ERROR_LOG / opal_pmix_base_select failure output as above; only the process and job IDs differ ...]
> 
>       Start 54: test-yaspgrid-yaspfactory-2d
> 54/62 Test #54: test-yaspgrid-yaspfactory-2d ..........................***Failed    0.02 sec
> [... identical Open MPI orte_init / MPI_INIT failure output as above; only the process and job IDs differ ...]
> 
>       Start 55: test-yaspgrid-yaspfactory-2d-mpi-2
> 55/62 Test #55: test-yaspgrid-yaspfactory-2d-mpi-2 ....................***Failed    0.01 sec
> [... identical ORTE_ERROR_LOG / opal_pmix_base_select failure output as above; only the process and job IDs differ ...]
> 
>       Start 56: test-yaspgrid-yaspfactory-3d
> 56/62 Test #56: test-yaspgrid-yaspfactory-3d ..........................***Failed    0.02 sec
> [... identical Open MPI orte_init / MPI_INIT failure output as above; only the process and job IDs differ ...]
> 
>       Start 57: test-yaspgrid-yaspfactory-3d-mpi-2
> 57/62 Test #57: test-yaspgrid-yaspfactory-3d-mpi-2 ....................***Failed    0.01 sec
> [... identical ORTE_ERROR_LOG / opal_pmix_base_select failure output as above; only the process and job IDs differ ...]
> 
>       Start 58: globalindexsettest
> 58/62 Test #58: globalindexsettest ....................................***Failed    0.02 sec
> [... identical Open MPI orte_init / MPI_INIT failure output as above; only the process and job IDs differ ...]
> 
>       Start 59: persistentcontainertest
> 59/62 Test #59: persistentcontainertest 
> ...............................***Failed    0.02 sec
> [ip-172-31-2-151:01534] [[2560,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-2-151:01533] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-2-151:01533] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-2-151:01533] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start 60: structuredgridfactorytest
> 60/62 Test #60: structuredgridfactorytest 
> .............................***Failed    0.02 sec
> [ip-172-31-2-151:01536] [[2558,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-2-151:01535] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-2-151:01535] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-2-151:01535] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start 61: tensorgridfactorytest
> 61/62 Test #61: tensorgridfactorytest 
> .................................***Failed    0.02 sec
> [ip-172-31-2-151:01538] [[2556,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-2-151:01537] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-2-151:01537] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-2-151:01537] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
>       Start 62: vertexordertest
> 62/62 Test #62: vertexordertest 
> .......................................***Failed    0.02 sec
> [ip-172-31-2-151:01540] [[2554,0],0] ORTE_ERROR_LOG: Not found in file 
> ../../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line 320
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_pmix_base_select failed
>   --> Returned value Not found (-13) instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> [ip-172-31-2-151:01539] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 716
> [ip-172-31-2-151:01539] [[INVALID],INVALID] ORTE_ERROR_LOG: Unable to start a 
> daemon on the local node in file 
> ../../../../../../orte/mca/ess/singleton/ess_singleton_module.c at line 172
> --------------------------------------------------------------------------
> It looks like orte_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during orte_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   orte_ess_init failed
>   --> Returned value Unable to start a daemon on the local node (-127) 
> instead of ORTE_SUCCESS
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: ompi_rte_init failed
>   --> Returned "Unable to start a daemon on the local node" (-127) instead of 
> "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
> ***    and potentially your MPI job)
> [ip-172-31-2-151:01539] Local abort before MPI_INIT completed completed 
> successfully, but am not able to aggregate error messages, and not able to 
> guarantee that all other processes were killed!
> 
> 
> 24% tests passed, 47 tests failed out of 62
> 
> Total Test time (real) =   1.60 sec
> 
> The following tests did not run:
>         6 - printgridtest (Skipped)
>        11 - amirameshtest (Skipped)
>        35 - test-loadbalancing (Skipped)
> 
> The following tests FAILED:
>         1 - scsgmappertest (Failed)
>         2 - mcmgmappertest (Failed)
>         3 - conformvolumevtktest (Failed)
>         4 - gnuplottest (Failed)
>         5 - nonconformboundaryvtktest (Failed)
>         7 - subsamplingvtktest (Failed)
>         8 - vtktest (Failed)
>         9 - vtktest-mpi-2 (Failed)
>        10 - vtksequencetest (Failed)
>        12 - starcdreadertest (Failed)
>        13 - gmshtest-onedgrid (Failed)
>        14 - gmshtest-uggrid (Failed)
>        17 - test-dgf-yasp (Failed)
>        18 - test-dgf-yasp-offset (Failed)
>        19 - test-dgf-oned (Failed)
>        21 - test-dgf-ug (Failed)
>        22 - test-dgf-gmsh-ug (Failed)
>        24 - test-geogrid-yaspgrid (Failed)
>        25 - test-geogrid-uggrid (Failed)
>        26 - test-gridinfo (Failed)
>        27 - test-identitygrid (Failed)
>        29 - test-mcmg-geogrid (Failed)
>        30 - testiteratorranges (Failed)
>        31 - test-hierarchicsearch (Failed)
>        32 - test-ug (Failed)
>        33 - test-parallel-ug (Failed)
>        34 - test-parallel-ug-mpi-2 (Failed)
>        36 - issue-53-uggrid-intersections (Failed)
>        43 - test-yaspgrid-backuprestore-equidistant (Failed)
>        44 - test-yaspgrid-backuprestore-equidistant-mpi-2 (Failed)
>        45 - test-yaspgrid-backuprestore-equidistantoffset (Failed)
>        46 - test-yaspgrid-backuprestore-equidistantoffset-mpi-2 (Failed)
>        47 - test-yaspgrid-backuprestore-tensor (Failed)
>        48 - test-yaspgrid-backuprestore-tensor-mpi-2 (Failed)
>        50 - test-yaspgrid-tensorgridfactory (Failed)
>        51 - test-yaspgrid-tensorgridfactory-mpi-2 (Failed)
>        52 - test-yaspgrid-yaspfactory-1d (Failed)
>        53 - test-yaspgrid-yaspfactory-1d-mpi-2 (Failed)
>        54 - test-yaspgrid-yaspfactory-2d (Failed)
>        55 - test-yaspgrid-yaspfactory-2d-mpi-2 (Failed)
>        56 - test-yaspgrid-yaspfactory-3d (Failed)
>        57 - test-yaspgrid-yaspfactory-3d-mpi-2 (Failed)
>        58 - globalindexsettest (Failed)
>        59 - persistentcontainertest (Failed)
>        60 - structuredgridfactorytest (Failed)
>        61 - tensorgridfactorytest (Failed)
>        62 - vertexordertest (Failed)
> Errors while running CTest
> ======================================================================
> Name:      scsgmappertest
> FullName:  ./dune/grid/common/test/scsgmappertest
> Status:    FAILED
> 
> ======================================================================
> Name:      mcmgmappertest
> FullName:  ./dune/grid/common/test/mcmgmappertest
> Status:    FAILED
> 
> ======================================================================
> Name:      conformvolumevtktest
> FullName:  ./dune/grid/io/file/test/conformvolumevtktest
> Status:    FAILED
> 
> ======================================================================
> Name:      gnuplottest
> FullName:  ./dune/grid/io/file/test/gnuplottest
> Status:    FAILED
> 
> ======================================================================
> Name:      nonconformboundaryvtktest
> FullName:  ./dune/grid/io/file/test/nonconformboundaryvtktest
> Status:    FAILED
> 
> ======================================================================
> Name:      subsamplingvtktest
> FullName:  ./dune/grid/io/file/test/subsamplingvtktest
> Status:    FAILED
> 
> ======================================================================
> Name:      vtktest
> FullName:  ./dune/grid/io/file/test/vtktest
> Status:    FAILED
> 
> ======================================================================
> Name:      vtktest-mpi-2
> FullName:  ./dune/grid/io/file/test/vtktest-mpi-2
> Status:    FAILED
> 
> ======================================================================
> Name:      vtksequencetest
> FullName:  ./dune/grid/io/file/test/vtksequencetest
> Status:    FAILED
> 
> ======================================================================
> Name:      starcdreadertest
> FullName:  ./dune/grid/io/file/test/starcdreadertest
> Status:    FAILED
> 
> ======================================================================
> Name:      gmshtest-onedgrid
> FullName:  ./dune/grid/io/file/test/gmshtest-onedgrid
> Status:    FAILED
> 
> ======================================================================
> Name:      gmshtest-uggrid
> FullName:  ./dune/grid/io/file/test/gmshtest-uggrid
> Status:    FAILED
> 
> ======================================================================
> Name:      test-dgf-yasp
> FullName:  ./dune/grid/io/file/dgfparser/test/test-dgf-yasp
> Status:    FAILED
> 
> ======================================================================
> Name:      test-dgf-yasp-offset
> FullName:  ./dune/grid/io/file/dgfparser/test/test-dgf-yasp-offset
> Status:    FAILED
> 
> ======================================================================
> Name:      test-dgf-oned
> FullName:  ./dune/grid/io/file/dgfparser/test/test-dgf-oned
> Status:    FAILED
> 
> ======================================================================
> Name:      test-dgf-ug
> FullName:  ./dune/grid/io/file/dgfparser/test/test-dgf-ug
> Status:    FAILED
> 
> ======================================================================
> Name:      test-dgf-gmsh-ug
> FullName:  ./dune/grid/io/file/dgfparser/test/test-dgf-gmsh-ug
> Status:    FAILED
> 
> ======================================================================
> Name:      test-geogrid-yaspgrid
> FullName:  ./dune/grid/test/test-geogrid-yaspgrid
> Status:    FAILED
> 
> ======================================================================
> Name:      test-geogrid-uggrid
> FullName:  ./dune/grid/test/test-geogrid-uggrid
> Status:    FAILED
> 
> ======================================================================
> Name:      test-gridinfo
> FullName:  ./dune/grid/test/test-gridinfo
> Status:    FAILED
> 
> ======================================================================
> Name:      test-identitygrid
> FullName:  ./dune/grid/test/test-identitygrid
> Status:    FAILED
> 
> ======================================================================
> Name:      test-mcmg-geogrid
> FullName:  ./dune/grid/test/test-mcmg-geogrid
> Status:    FAILED
> 
> ======================================================================
> Name:      testiteratorranges
> FullName:  ./dune/grid/test/testiteratorranges
> Status:    FAILED
> 
> ======================================================================
> Name:      test-hierarchicsearch
> FullName:  ./dune/grid/test/test-hierarchicsearch
> Status:    FAILED
> 
> ======================================================================
> Name:      test-ug
> FullName:  ./dune/grid/test/test-ug
> Status:    FAILED
> 
> ======================================================================
> Name:      test-parallel-ug
> FullName:  ./dune/grid/test/test-parallel-ug
> Status:    FAILED
> 
> ======================================================================
> Name:      test-parallel-ug-mpi-2
> FullName:  ./dune/grid/test/test-parallel-ug-mpi-2
> Status:    FAILED
> 
> ======================================================================
> Name:      issue-53-uggrid-intersections
> FullName:  ./dune/grid/test/issue-53-uggrid-intersections
> Status:    FAILED
> 
> ======================================================================
> Name:      test-yaspgrid-backuprestore-equidistant
> FullName:  ./dune/grid/test/yasp/test-yaspgrid-backuprestore-equidistant
> Status:    FAILED
> 
> ======================================================================
> Name:      test-yaspgrid-backuprestore-equidistant-mpi-2
> FullName:  ./dune/grid/test/yasp/test-yaspgrid-backuprestore-equidistant-mpi-2
> Status:    FAILED
> 
> ======================================================================
> Name:      test-yaspgrid-backuprestore-equidistantoffset
> FullName:  ./dune/grid/test/yasp/test-yaspgrid-backuprestore-equidistantoffset
> Status:    FAILED
> 
> ======================================================================
> Name:      test-yaspgrid-backuprestore-equidistantoffset-mpi-2
> FullName:  ./dune/grid/test/yasp/test-yaspgrid-backuprestore-equidistantoffset-mpi-2
> Status:    FAILED
> 
> ======================================================================
> Name:      test-yaspgrid-backuprestore-tensor
> FullName:  ./dune/grid/test/yasp/test-yaspgrid-backuprestore-tensor
> Status:    FAILED
> 
> ======================================================================
> Name:      test-yaspgrid-backuprestore-tensor-mpi-2
> FullName:  ./dune/grid/test/yasp/test-yaspgrid-backuprestore-tensor-mpi-2
> Status:    FAILED
> 
> ======================================================================
> Name:      test-yaspgrid-tensorgridfactory
> FullName:  ./dune/grid/test/yasp/test-yaspgrid-tensorgridfactory
> Status:    FAILED
> 
> ======================================================================
> Name:      test-yaspgrid-tensorgridfactory-mpi-2
> FullName:  ./dune/grid/test/yasp/test-yaspgrid-tensorgridfactory-mpi-2
> Status:    FAILED
> 
> ======================================================================
> Name:      test-yaspgrid-yaspfactory-1d
> FullName:  ./dune/grid/test/yasp/test-yaspgrid-yaspfactory-1d
> Status:    FAILED
> 
> ======================================================================
> Name:      test-yaspgrid-yaspfactory-1d-mpi-2
> FullName:  ./dune/grid/test/yasp/test-yaspgrid-yaspfactory-1d-mpi-2
> Status:    FAILED
> 
> ======================================================================
> Name:      test-yaspgrid-yaspfactory-2d
> FullName:  ./dune/grid/test/yasp/test-yaspgrid-yaspfactory-2d
> Status:    FAILED
> 
> ======================================================================
> Name:      test-yaspgrid-yaspfactory-2d-mpi-2
> FullName:  ./dune/grid/test/yasp/test-yaspgrid-yaspfactory-2d-mpi-2
> Status:    FAILED
> 
> ======================================================================
> Name:      test-yaspgrid-yaspfactory-3d
> FullName:  ./dune/grid/test/yasp/test-yaspgrid-yaspfactory-3d
> Status:    FAILED
> 
> ======================================================================
> Name:      test-yaspgrid-yaspfactory-3d-mpi-2
> FullName:  ./dune/grid/test/yasp/test-yaspgrid-yaspfactory-3d-mpi-2
> Status:    FAILED
> 
> ======================================================================
> Name:      globalindexsettest
> FullName:  ./dune/grid/utility/test/globalindexsettest
> Status:    FAILED
> 
> ======================================================================
> Name:      persistentcontainertest
> FullName:  ./dune/grid/utility/test/persistentcontainertest
> Status:    FAILED
> 
> ======================================================================
> Name:      structuredgridfactorytest
> FullName:  ./dune/grid/utility/test/structuredgridfactorytest
> Status:    FAILED
> 
> ======================================================================
> Name:      tensorgridfactorytest
> FullName:  ./dune/grid/utility/test/tensorgridfactorytest
> Status:    FAILED
> 
> ======================================================================
> Name:      vertexordertest
> FullName:  ./dune/grid/utility/test/vertexordertest
> Status:    FAILED
> 
> JUnit report for CTest results written to 
> /<<PKGBUILDDIR>>/build/junit/cmake.xml
> make[1]: *** [/usr/share/dune/dune-debian.mk:39: override_dh_auto_test] Error 
> 1

The full build log is available from:
   http://qa-logs.debian.net/2020/12/26/dune-grid_2.7.0-2_unstable.log

A list of current common problems and possible solutions is available at
http://wiki.debian.org/qa.debian.org/FTBFS . You're welcome to contribute!

If you reassign this bug to another package, please mark it as 'affects'-ing
this package. See https://www.debian.org/Bugs/server-control#affects

If you fail to reproduce this, please provide a build log and diff it with
mine, so that we can identify whether something relevant changed in the meantime.

About the archive rebuild: The rebuild was done on EC2 VM instances from
Amazon Web Services, using a clean, minimal and up-to-date chroot. Every
failed build was retried once to eliminate random failures.

--- End Message ---
--- Begin Message ---
Hi,

This failure was caused by a bug in openmpi, which was fixed in openmpi 4.1.0-2,
so I'm closing this bug.
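
For anyone who wants to double-check the fix: the failing tests all abort
inside MPI_Init when the test binary is started directly, i.e. in Open MPI
"singleton" mode (no mpirun), which is the code path shown in the log above.
A minimal, self-contained check is sketched below; the file name and the
build/run commands are only illustrative and not taken from the report.

  /* singleton_check.c - confirm that MPI_Init works in "singleton" mode,
   * i.e. when the binary is started directly instead of via mpirun.
   * Build and run (illustrative):
   *   mpicc -o singleton_check singleton_check.c
   *   ./singleton_check
   */
  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, size;

      /* This is the call that aborted with "ompi_rte_init failed"
       * before the openmpi fix. */
      MPI_Init(&argc, &argv);

      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      printf("MPI_Init succeeded: rank %d of %d\n", rank, size);

      MPI_Finalize();
      return 0;
  }

With the fixed openmpi this should print "rank 0 of 1"; with the affected
openmpi it aborts before MPI_Init completes, exactly as in the dune-grid
test runs above.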

Lucas

--- End Message ---
