Hi

The averaging takes place on the same hardware, and as part of the same job, as writing the matrices to file in the C++ code. The number of nodes is determined by the problem size, and therefore by a single matrix. Allocating extra nodes just for the averaging is unfortunately not economical.

On 22 May 2025, at 16:52, Junchao Zhang <junchao.zh...@gmail.com> wrote:


Did you run in MPI parallel?   If not, using MPI and running with multiple compute nodes could solve the problem.

Are all these matrices already on disk?  Then you have to pay the I/O cost for reading the matrices. 

--Junchao Zhang


On Thu, May 22, 2025 at 8:21 AM Donald Duck <superdduc...@gmail.com> wrote:
Hello everyone

A piece of C++ code writes PETSc AIJ matrices to binary files. My task is to compute the average of these AIJ matrices, so I read the first matrix with petsc4py and then add the other matrices to it. All matrices always have the same size, shape, nnz, etc.

However, in some cases these matrices are so large that only one of them fits into memory, and the reading/writing takes a significant amount of time. Is there a way to read a matrix row by row to prevent running out of memory?
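Since all the matrices share one sparsity pattern, one possible workaround is to stream only the values segment of the binary files in fixed-size chunks, so no full matrix is ever held in memory. This is a minimal sketch, not petsc4py: it assumes the default PETSc binary Mat layout (big-endian 32-bit header of MAT_FILE_CLASSID, rows, cols, nnz; then per-row counts, column indices, and double-precision values). Builds with 64-bit indices or a different scalar type would need other dtypes, and the helper name `average_petsc_mats` is made up for illustration.

```python
import numpy as np

MAT_CLASSID = 1211216  # PETSc's MAT_FILE_CLASSID in the default binary format


def average_petsc_mats(paths, out_path, chunk=1 << 20):
    """Average structurally identical PETSc binary AIJ matrices,
    reading at most `chunk` values per file at a time."""
    # Read the header and index arrays of the first file:
    # classid, rows, cols, nnz, then per-row counts and column indices.
    with open(paths[0], "rb") as f:
        hdr = np.fromfile(f, dtype=">i4", count=4)
        assert hdr[0] == MAT_CLASSID, "not a PETSc binary Mat file"
        m, n, nz = (int(x) for x in hdr[1:])
        rowlens = np.fromfile(f, dtype=">i4", count=m)
        colidx = np.fromfile(f, dtype=">i4", count=nz)
        vals_off = f.tell()  # byte offset where the values section starts

    with open(out_path, "wb") as out:
        # Identical structure in every input, so header and indices
        # can be copied verbatim from the first file.
        hdr.tofile(out)
        rowlens.tofile(out)
        colidx.tofile(out)

        files = [open(p, "rb") for p in paths]
        try:
            for f in files:
                f.seek(vals_off)
            done = 0
            while done < nz:
                count = min(chunk, nz - done)
                acc = np.zeros(count)
                for f in files:
                    acc += np.fromfile(f, dtype=">f8", count=count)
                (acc / len(paths)).astype(">f8").tofile(out)
                done += count
        finally:
            for f in files:
                f.close()
```

Because the output keeps the first input's header and index arrays, it should load like any other PETSc binary matrix afterwards.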

I'm also open to other suggestions, thanks in advance!

Raphael
