Take a look at MatLoad_SeqAIJ_Binary(). You will see how easy it is to
simplify that code to get what you need. You can call
PetscViewerBinaryGetDescriptor() and then use lseek() to skip over the
integer part of the matrix storage (the row lengths and column indices)
without even reading it.
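Since the original script uses petsc4py, here is a minimal numpy sketch of
the same idea (skip the integer sections, read only the scalar values). It
assumes the default PETSc binary layout: big-endian byte order, 32-bit
integer indices, real double-precision scalars; the function name is just
for illustration.

    import numpy as np

    MAT_FILE_CLASSID = 1211216  # magic number at the start of a PETSc Mat binary file

    def read_values_only(path):
        with open(path, 'rb') as f:
            # header: classid, rows, cols, total nonzeros (big-endian int32)
            classid, m, n, nz = (int(x) for x in np.fromfile(f, dtype='>i4', count=4))
            if classid != MAT_FILE_CLASSID:
                raise ValueError("not a PETSc Mat binary file: %s" % path)
            # skip the m row lengths and the nz column indices (4 bytes each)
            f.seek(4 * (m + nz), 1)
            # read only the nonzero values (big-endian float64)
            return np.fromfile(f, dtype='>f8', count=nz)

The value arrays of all files can then be summed and divided by the number
of files, and the averaged matrix assembled once from the indices of the
first file.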
How big are these matrices?
The memory PETSc allocates is (8+4)*num_nnz + 4*num_rows bytes (in double
precision)
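For example (the counts here are purely illustrative), a matrix with 10^6
rows and 10^8 nonzeros takes about (8+4)*10^8 + 4*10^6 ≈ 1.2*10^9 bytes,
i.e. roughly 1.2 GB per matrix held in memory.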
https://gitlab.com/petsc/petsc/-/blob/main/src/mat/impls/aij/seq/aij.h#L47
Hi,

The averaging takes place on the same hardware and as part of the same job
as the C++ code that writes the matrices to file. The number of nodes is
determined by the problem size, and therefore sized for a single matrix.
Allocating extra nodes just for the averaging is unfortunately not
economical.
Did you run in MPI parallel? If not, using MPI and running with multiple
compute nodes could solve the problem.
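For instance (the script name is just a placeholder), the same petsc4py
script could be launched as

    mpiexec -n 8 python average_matrices.py

and MatLoad should then distribute the rows of each matrix across the
ranks, so no single process has to hold a whole matrix.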
Are all these matrices already on disk? Then you have to pay the I/O cost
for reading the matrices.
--Junchao Zhang
On Thu, May 22, 2025 at 8:21 AM Donald Duck wrote:
Hello everyone,

A piece of C++ code writes PETSc AIJ matrices to binary files. My task is
to compute an average matrix of these AIJ matrices, so I read the first
matrix with petsc4py and then add the other matrices to it. All matrices
have the same size, shape, nnz, etc.
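A minimal petsc4py sketch of that read-and-accumulate loop (the file names
are placeholders; SAME_NONZERO_PATTERN relies on all matrices having the
identical nonzero pattern mentioned above):

    from petsc4py import PETSc

    files = ["mat_0.dat", "mat_1.dat", "mat_2.dat"]  # placeholder paths

    viewer = PETSc.Viewer().createBinary(files[0], 'r')
    A = PETSc.Mat().load(viewer)      # accumulate into the first matrix
    viewer.destroy()

    for path in files[1:]:
        viewer = PETSc.Viewer().createBinary(path, 'r')
        B = PETSc.Mat().load(viewer)
        viewer.destroy()
        # A += B, skipping the pattern merge since the patterns are identical
        A.axpy(1.0, B, structure=PETSc.Mat.Structure.SAME_NONZERO_PATTERN)
        B.destroy()

    A.scale(1.0 / len(files))         # A now holds the average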
Howe