On Mon, Nov 18 2019, Drew Parsons <dpars...@debian.org> wrote:
> But note that this test only measures the time for loading the h5py
> module itself, so it does not provide a good measure of performance
> with mpi support available. It's not fair to characterise it as ×2.5
> slower, since this is a once-off cost in CPU time. i.e. the relevant
> quantity here is the additional 0.4 sec of time to load the module.
> It's a bit of a stretch to say that 0.4 sec is a severe performance
> penalty, I think.
Hi, Drew.

I would say that 0.4 seconds is a pretty bad performance penalty, actually, for nothing other than importing the library. We have many command line applications that can process hdf5 files, and this poor import performance slows them down even when hdf5 processing is not used. If the application itself needs to run fast, e.g. because it is invoked inside an optimization loop, then that kind of poor initialization performance can be horrible. A trend like this points to the need to load the library opportunistically, only when it is actually needed, which is generally quite inconvenient to do.

And isn't the MPI inclusion supposed to be a performance *improvement*? If the use of MPI doesn't increase performance by more than three times, then it's not actually improving performance for many applications. But again, we shouldn't penalize people who are not using MPI just to benefit those who are. The base 2.10 import time illustrated above (1/4 second) is already bad, but >3/4 second is way too much imho.

Anyway, this should just further bolster the argument that we should not be enabling MPI support by default.

jamie.
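For what it's worth, the "load only when needed" workaround can be sketched as a small lazy-import wrapper; this is just an illustration of the inconvenience involved, not anything h5py ships. The example uses the stdlib `json` module as a stand-in so it runs anywhere, but the intended target would be `h5py`:

```python
import importlib


class LazyModule:
    """Defer importing a heavy module until one of its attributes is
    first accessed, so applications that never touch it don't pay the
    import cost at startup."""

    def __init__(self, name):
        self._name = name
        self._mod = None  # real module, loaded on first use

    def __getattr__(self, attr):
        # Only called for attributes not set in __init__, i.e. the
        # wrapped module's own names.
        if self._mod is None:
            self._mod = importlib.import_module(self._name)
        return getattr(self._mod, attr)


# Hypothetical use for the case discussed here:
#   h5py = LazyModule("h5py")
# Stand-in demonstration with a module that is always available:
json = LazyModule("json")
print(json.loads('{"ok": true}'))
```

Every call site then has to go through the wrapper (or a local `import h5py` inside each function that needs it), which is exactly the inconvenience mentioned above.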