You need to call MatZeroRows() once, passing all the rows you want zeroed, instead of once for each row.
If you are running in parallel, each MPI process should call MatZeroRows() once, passing in a list of rows to be zeroed. Each process can pass in different rows than the other processes; a sketch of the batched call follows the quoted message below.

BTW: You do not need to call MatAssemblyBegin/End() after MatZeroRows().

  Barry

> On Nov 29, 2024, at 9:56 PM, Qiyue Lu <qiyue...@gmail.com> wrote:
>
> Hello,
> In the MPI context, after assembling the distributed matrix A (matmpiaij) and
> the right-hand side b, I am trying to apply the 1st-kind boundary condition
> using MatZeroRows() and VecSetValues(), for A and b respectively.
> The pseudo-code is:
> =========
> for (int key = 0; key < BCNodes_Length; key++){
>     // retrieve the global row position
>     pos = BCNodes[key];
>     // zero all elements in that row except the diagonal, which is set to 1.0
>     MatZeroRows(A, 1, &pos, 1.0, NULL, NULL);
> }
> MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
> MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
> =========
>
> For BCNodes_Length = 10^4, the FOR loop takes 8 seconds.
> For BCNodes_Length = 15*10^4, the FOR loop takes 3000 seconds.
> I am using two computational nodes, each with 12 cores.
>
> My questions are:
> 1) Is the timing plausible? Is the MatZeroRows() function so costly?
> 2) Any suggestions to apply the 1st-kind boundary conditions for better
>    performance?
>
> Thanks,
> Qiyue Lu
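A minimal sketch of the batched call, assuming the BCNodes array from the pseudo-code above holds this rank's boundary rows as global PetscInt indices (BCNodes and BCNodes_Length are the question's variables, not PETSc names):

=========
#include <petscmat.h>

/* A has already been assembled with MatAssemblyBegin/End().
   BCNodes is this rank's list of boundary rows (global indices);
   each rank may pass a different list. */
PetscCall(MatZeroRows(A, BCNodes_Length, BCNodes, 1.0, NULL, NULL));
/* No MatAssemblyBegin/End() is needed after MatZeroRows(). */
=========

Optionally, the last two arguments can be a vector x holding the boundary values and the right-hand side b; MatZeroRows() then sets b at those rows to diag*x, which can replace the separate VecSetValues() calls on b.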