Package: mpi4py
Version: 2.0.0-2
Severity: important

Hi,
mpi4py FTBFS on hurd and kfreebsd, blocking the builds of several dependent packages.

https://buildd.debian.org/status/fetch.php?pkg=mpi4py&arch=hurd-i386&ver=2.0.0-2%2Bb1&stamp=1473623691
https://buildd.debian.org/status/fetch.php?pkg=mpi4py&arch=kfreebsd-i386&ver=2.0.0-2&stamp=1473761554
https://buildd.debian.org/status/fetch.php?pkg=mpi4py&arch=kfreebsd-amd64&ver=2.0.0-2&stamp=1473761937

[...]
   debian/rules override_dh_auto_test
make[1]: Entering directory '/«PKGBUILDDIR»'
pyversions: missing X(S)-Python-Version in control file, fall back to debian/pyversions
pyversions: missing debian/pyversions file, fall back to supported versions
py3versions: no X-Python3-Version in control file, using supported versions
set -e; for v in 2.7 3.5; do \
  echo "I: testing using python$v"; \
  PYTHONPATH=`/bin/ls -d /«PKGBUILDDIR»/build/lib.*-$v` \
  /usr/bin/python$v /usr/bin/nosetests -v --exclude='testPackUnpackExternal' -A "not network" ; \
done
I: testing using python2.7
[fano:75254] PMIX ERROR: UNREACHABLE in file src/client/pmix_client.c at line 983
[fano:75255] PMIX ERROR: NOT-SUPPORTED in file src/server/pmix_server_listener.c at line 540
[fano:75254] PMIX ERROR: UNREACHABLE in file src/client/pmix_client.c at line 199
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  init pmix failed
  --> Returned value Unreachable (-12) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  orte_ess_init failed
  --> Returned value Unreachable (-12) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
--------------------------------------------------------------------------
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  ompi_mpi_init: ompi_rte_init failed
  --> Returned "Unreachable" (-12) instead of "Success" (0)
--------------------------------------------------------------------------
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[fano:75254] Local abort before MPI_INIT completed completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
debian/rules:89: recipe for target 'override_dh_auto_test' failed
make[1]: *** [override_dh_auto_test] Error 1
make[1]: Leaving directory '/«PKGBUILDDIR»'
debian/rules:22: recipe for target 'build-arch' failed
make: *** [build-arch] Error 2

Andreas
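Until MPI_Init works on these ports, one possible workaround (a sketch only, not tested; the architecture list, the BROKEN_MPI_ARCHES variable name, and the use of $(CURDIR) in place of the buildd's masked path are my assumptions) would be to skip the MPI-dependent test loop in debian/rules on the affected architectures:

```make
# Sketch: skip the test suite where Open MPI's PMIx runtime fails.
# architecture.mk (from dpkg-dev) defines DEB_HOST_ARCH.
include /usr/share/dpkg/architecture.mk

# Hypothetical list, based on the buildd logs above.
BROKEN_MPI_ARCHES := hurd-i386 kfreebsd-i386 kfreebsd-amd64

override_dh_auto_test:
ifneq (,$(filter $(DEB_HOST_ARCH),$(BROKEN_MPI_ARCHES)))
	@echo "I: skipping tests on $(DEB_HOST_ARCH) (MPI_Init fails, see this bug)"
else
	set -e; for v in 2.7 3.5; do \
	  echo "I: testing using python$$v"; \
	  PYTHONPATH=`/bin/ls -d $(CURDIR)/build/lib.*-$$v` \
	  /usr/bin/python$$v /usr/bin/nosetests -v --exclude='testPackUnpackExternal' -A "not network" ; \
	done
endif
```

That would at least unblock the dependent packages on those architectures, at the cost of no test coverage there.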