I am unable to understand what went wrong with my code. I can load a large
sparse matrix into PETSc, write it out, and read it back into MATLAB, but when
I try to use MatView to see the matrix info, it produces a 'corrupt argument,
#valgrind' error. Can anyone please help?
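For reference, a minimal sketch along the lines of what I am doing follows; the
file name and the ASCII_INFO viewer format below are placeholders, not my
actual code:

  #include <petscmat.h>

  int main(int argc, char **argv)
  {
    Mat         A;
    PetscViewer viewer;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    /* Load the matrix from a PETSc binary file (file name is hypothetical) */
    PetscCall(PetscViewerBinaryOpen(PETSC_COMM_WORLD, "matrix.dat", FILE_MODE_READ, &viewer));
    PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
    PetscCall(MatLoad(A, viewer));
    PetscCall(PetscViewerDestroy(&viewer));
    /* Print only the matrix information (sizes, nonzero counts), not the entries */
    PetscCall(PetscViewerPushFormat(PETSC_VIEWER_STDOUT_WORLD, PETSC_VIEWER_ASCII_INFO));
    PetscCall(MatView(A, PETSC_VIEWER_STDOUT_WORLD));
    PetscCall(PetscViewerPopFormat(PETSC_VIEWER_STDOUT_WORLD));
    PetscCall(MatDestroy(&A));
    PetscCall(PetscFinalize());
    return 0;
  }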
We would need more information about "hanging". Do PETSc examples and tiny
problems "hang" on multiple nodes? If you run with -info what are the last
messages printed? Can you run with a debugger to see where it is "hanging"?
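For example (the executable name and process count are placeholders), those
options are passed on the command line at run time:

  mpiexec -n 8 ./myapp -info                 # print PETSc's informational messages
  mpiexec -n 8 ./myapp -start_in_debugger    # attach a debugger to each rank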
> On Aug 9, 2023, at 5:59 PM, Ng, Cho-Kuen wrote:
Barry and Matt,
Thanks for your help. Now I can use the PETSc GPU backend on Perlmutter: 1 node,
4 MPI tasks, and 4 GPUs. However, I ran into problems with multiple nodes: 2
nodes, 8 MPI tasks, and 8 GPUs. The run hung in KSPSolve. How can I fix this?
Best,
Cho
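For context, a multi-node run like the one described might be launched along
these lines; the Slurm flags, executable name, and PETSc GPU options here are
assumptions, not details from Cho's message:

  srun -N 2 -n 8 --gpus-per-node=4 ./myapp \
      -vec_type cuda -mat_type aijcusparse -log_view -info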
Dear PETSc devs,
We have noticed a performance regression using GAMG as the
preconditioner to solve the velocity block in a Stokes equations saddle
point system with variable viscosity solved on a 3D hexahedral mesh of a
spherical shell using Q2-Q1 elements. This is comparing performance from
On Wed, Aug 9, 2023 at 11:09 AM Ilya Fursov wrote:
> Hello,
>
> I have a problem running src/snes/tutorials/ex17.c in parallel,
> given the specific runtime options (these options are actually taken from
> the test example ex17_3d_q3_trig_elas).
>
> *The serial version works fine:*
> ./ex17 -dm_p
TSRK is an explicit solver. Unless you are changing the TS type from the
command line, the explicit Jacobian should not be needed. On top of Barry's
suggestion, I would suggest you write the explicit RHS instead of assembling a
throw-away matrix every time that function needs to be sampled.
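Concretely, a minimal sketch of what I mean (the callback name and the toy
right-hand side F = -u are just placeholders for your real problem): register
a right-hand-side function with TSSetRHSFunction and evaluate it directly,
without assembling a matrix inside the callback.

  #include <petscts.h>

  /* F(t,u) = -u, evaluated directly each time it is sampled; no throw-away matrix */
  static PetscErrorCode MyRHSFunction(TS ts, PetscReal t, Vec U, Vec F, void *ctx)
  {
    PetscFunctionBeginUser;
    PetscCall(VecCopy(U, F));
    PetscCall(VecScale(F, -1.0));
    PetscFunctionReturn(PETSC_SUCCESS);
  }

  /* during setup */
  PetscCall(TSSetRHSFunction(ts, NULL, MyRHSFunction, NULL));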
On Wed,
Was PETSc built with debugging turned off, i.e. ./configure --with-debugging=0?
Can you run with the equivalent of -log_view to get information about the
time spent in the various operations and send that information? The data
generated is the best starting point for determining where the computation
time is being spent.
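For example (the executable name and process count are placeholders):

  ./configure --with-debugging=0    # optimized (non-debug) PETSc build
  mpiexec -n 4 ./myapp -log_view    # prints a performance summary at the end of the run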
Hi all,
I'm currently trying to convert a quantum simulation from scipy to
PETSc. The problem itself is extremely simple and of the form \dot{u}(t)
= (A_const + f(t)*B_const)*u(t), where f(t) in this simple test case is
a square function. The matrices A_const and B_const are extremely sparse
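A minimal sketch of the corresponding TS right-hand-side callback is below; the
AppCtx struct, the SquareWave helper, and all names are assumptions for
illustration, not my actual code:

  #include <petscts.h>
  #include <math.h>

  typedef struct {
    Mat A, B;   /* the constant sparse operators */
    Vec work;   /* scratch vector for B*u */
  } AppCtx;

  /* hypothetical square wave with period 1: f = 1 on [0,0.5), 0 on [0.5,1) */
  static PetscReal SquareWave(PetscReal t)
  {
    return (fmod((double)t, 1.0) < 0.5) ? 1.0 : 0.0;
  }

  /* F = (A + f(t)*B) * u */
  static PetscErrorCode RHSFunction(TS ts, PetscReal t, Vec U, Vec F, void *ctx)
  {
    AppCtx *user = (AppCtx *)ctx;

    PetscFunctionBeginUser;
    PetscCall(MatMult(user->A, U, F));                 /* F    = A*u      */
    PetscCall(MatMult(user->B, U, user->work));        /* work = B*u      */
    PetscCall(VecAXPY(F, SquareWave(t), user->work));  /* F += f(t)*B*u   */
    PetscFunctionReturn(PETSC_SUCCESS);
  }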
Hello,
I have a problem running src/snes/tutorials/ex17.c in parallel,
given the specific runtime options (these options are actually taken from
the test example ex17_3d_q3_trig_elas).
*The serial version works fine:*
./ex17 -dm_plex_box_faces 1,1,1 -sol_type elas_trig -dm_plex_dim 3
-dm_plex_sim