Did you run in parallel with MPI?  If not, using MPI and running on multiple
compute nodes could solve the problem, since PETSc distributes the matrix
rows across processes and no single process needs to hold the whole matrix.

Are all these matrices already on disk?  If so, you have to pay the I/O cost
of reading them in any case.
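
A rough petsc4py sketch of what that could look like (file names and the
number of matrices are placeholders, not from your setup):

from petsc4py import PETSc

filenames = ["mat_0.dat", "mat_1.dat", "mat_2.dat"]  # placeholder file names

# Load the first matrix; when run under mpiexec, PETSc distributes the rows
# across the MPI processes, so no single process holds the whole matrix.
viewer = PETSc.Viewer().createBinary(filenames[0], mode="r")
A = PETSc.Mat().load(viewer)
viewer.destroy()

# Accumulate the remaining matrices into A.
for name in filenames[1:]:
    viewer = PETSc.Viewer().createBinary(name, mode="r")
    B = PETSc.Mat().load(viewer)
    viewer.destroy()
    # Same size/shape/nnz, so we can tell PETSc the nonzero pattern matches.
    A.axpy(1.0, B, structure=PETSc.Mat.Structure.SAME_NONZERO_PATTERN)
    B.destroy()

# Divide by the number of matrices to get the average.
A.scale(1.0 / len(filenames))

# Write the averaged matrix back out in PETSc binary format.
out = PETSc.Viewer().createBinary("mat_avg.dat", mode="w")
A.view(out)
out.destroy()

Run it with something like "mpiexec -n 8 python average_mats.py" so that each
rank only stores and reads its own local rows.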

--Junchao Zhang


On Thu, May 22, 2025 at 8:21 AM Donald Duck <superdduc...@gmail.com> wrote:

> Hello everyone
>
> A piece of C++ code writes PETSc AIJ matrices to binary files. My task is
> to compute an average matrix of these AIJ matrices. Therefore I read the
> first matrix with petsc4py and then start to add the other matrices to it.
> All matrices always have the same size, shape, nnz, etc.
>
> However, in some cases these matrices are large and only one of them fits
> into memory, and the reading/writing takes a significant amount of time. Is
> there a way to read them row by row to prevent memory overflow?
>
> I'm also open to other suggestions, thanks in advance!
>
> Raphael
>
