Hi,
I am trying to view a matrix object when running a job in parallel. The
application is BOUT++ (https://boutproject.github.io/), which uses PETSc as a
solver.
While the option -mat_view gives jumbled output on STDOUT from across the
processors, redirecting that output to a specific file results in a race
condition. The only option that works for me is -mat_view binary together with
-viewer_binary_filename <name>. I am processing the resulting binary file with
petsc4py as follows:

from petsc4py import PETSc
import numpy as np
import matplotlib.pyplot as plt
import glob

def read_binary(filename):
    # Load the matrix written out by -mat_view binary
    viewer = PETSc.Viewer().createMPIIO(filename, "r", comm=PETSc.COMM_WORLD)
    A = PETSc.Mat().load(viewer)
    A.assemble()
    print("Matrix type:", A.getType())
    print("Matrix size:", A.getSize())
    info = A.getInfo()
    print("Number of nonzeros:", info['nz_used'])
    print("Nonzeros allocated:", info['nz_allocated'])

    # Densify row by row. Note that getRow() can only access locally
    # owned rows, so looping over all m rows assumes a single-process run.
    m, n = A.getSize()
    dense_mat = np.zeros((m, n), dtype=np.float64)
    for i in range(m):
        cols, vals = A.getRow(i)
        dense_mat[i, cols] = vals

    # Record the sparsity pattern (row/column indices) of the locally
    # owned rows
    rstart, rend = A.getOwnershipRange()
    rows, cols = [], []
    for i in range(rstart, rend):
        cols_i, _ = A.getRow(i)
        rows.extend([i] * len(cols_i))
        cols.extend(cols_i)

    rows = np.array(rows)
    cols = np.array(cols)

    return dense_mat, rows, cols

dense_mat, rows, cols = read_binary("matrix")
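
For context, the rows/cols arrays feed a spy-style matplotlib plot of the
sparsity pattern, along the lines of this sketch (the styling here is just
illustrative, not my exact code):

plt.figure(figsize=(6, 6))
plt.plot(cols, rows, ".", markersize=1)
plt.gca().invert_yaxis()  # row 0 at the top, matching matrix layout
plt.xlabel("column")
plt.ylabel("row")
plt.savefig("sparsity.png")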

This yields the correct number of nonzeros and the expected matrix sparsity.
However, the dense matrix turns out to be all zeros (a null matrix). I wonder
what is going wrong here.
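
In case it helps with the diagnosis, below is an alternative densification
that I would expect to give the same result. It is only a sketch and makes
two assumptions: the script runs on a single process, and the matrix is of
AIJ type so that getValuesCSR() is available (scipy is also needed):

import scipy.sparse as sp

viewer = PETSc.Viewer().createBinary("matrix", "r", comm=PETSc.COMM_SELF)
B = PETSc.Mat().load(viewer)
m, n = B.getSize()
# getValuesCSR() returns the CSR triplet (indptr, indices, values) of the
# locally owned rows; on a single process that is the whole matrix
indptr, indices, values = B.getValuesCSR()
dense = sp.csr_matrix((values, indices, indptr), shape=(m, n)).toarray()
print("max |entry|:", abs(dense).max())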

Thank you, best regards
Arun Pillai
