Hello sir, 

I removed the duplicated "_type" and got the same error and output.
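
For reference, the corrected coarse-grid option (a sketch of what I believe was 
intended: an SVD solve on the GAMG coarse level, inside its block-Jacobi 
sub-solver) now reads:

  -fieldsplit_0_mg_coarse_sub_pc_type svd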

Best regards,
Xiaofeng


> On Jun 19, 2025, at 19:45, Matthew Knepley <knep...@gmail.com> wrote:
> 
> This option is wrong:
> 
>   -fieldsplit_0_mg_coarse_sub_pc_type_type svd
> 
> Notice that "_type" is repeated.
> 
>   Thanks,
> 
>       Matt
> 
> On Thu, Jun 19, 2025 at 7:10 AM hexioafeng <hexiaof...@buaa.edu.cn> wrote:
>> Dear authors,
>> 
>> Here are the options passed with fieldsplit preconditioner:
>> 
>> -ksp_type cg -pc_type fieldsplit -pc_fieldsplit_detect_saddle_point  
>> -pc_fieldsplit_type schur -pc_fieldsplit_schur_precondition selfp 
>> -pc_fieldsplit_schur_fact_type full -fieldsplit_0_ksp_type preonly 
>> -fieldsplit_0_pc_type gamg -fieldsplit_0_mg_coarse_sub_pc_type_type svd 
>> -fieldsplit_1_ksp_type preonly -fieldsplit_1_pc_type bjacobi -ksp_view  
>> -ksp_monitor_true_residual  -ksp_converged_reason  
>> -fieldsplit_0_mg_levels_ksp_monitor_true_residual  
>> -fieldsplit_0_mg_levels_ksp_converged_reason  
>> -fieldsplit_1_ksp_monitor_true_residual  -fieldsplit_1_ksp_converged_reason 
>> 
>> and the output:
>> 
>> 0 KSP unpreconditioned resid norm 2.777777777778e+01 true resid norm 
>> 2.777777777778e+01 ||r(i)||/||b|| 1.000000000000e+00
>>     Linear fieldsplit_0_mg_levels_1_ solve converged due to CONVERGED_ITS 
>> iterations 2
>>     Linear fieldsplit_0_mg_levels_1_ solve converged due to CONVERGED_ITS 
>> iterations 2
>>   Linear fieldsplit_1_ solve did not converge due to DIVERGED_PC_FAILED 
>> iterations 0
>>                  PC failed due to SUBPC_ERROR 
>>     Linear fieldsplit_0_mg_levels_1_ solve converged due to CONVERGED_ITS 
>> iterations 2
>>     Linear fieldsplit_0_mg_levels_1_ solve converged due to CONVERGED_ITS 
>> iterations 2
>> Linear solve did not converge due to DIVERGED_PC_FAILED iterations 0
>>                PC failed due to SUBPC_ERROR 
>> KSP Object: 1 MPI processes
>>   type: cg
>>   maximum iterations=200, initial guess is zero
>>   tolerances:  relative=1e-06, absolute=1e-12, divergence=1e+30
>>   left preconditioning
>>   using UNPRECONDITIONED norm type for convergence test
>> PC Object: 1 MPI processes
>>   type: fieldsplit
>>     FieldSplit with Schur preconditioner, blocksize = 1, factorization FULL
>>     Preconditioner for the Schur complement formed from Sp, an assembled 
>> approximation to S, which uses A00's diagonal's inverse
>>     Split info:
>>     Split number 0 Defined by IS
>>     Split number 1 Defined by IS
>>     KSP solver for A00 block
>>       KSP Object: (fieldsplit_0_) 1 MPI processes
>>         type: preonly
>>         maximum iterations=10000, initial guess is zero
>>         tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>         left preconditioning
>>         using NONE norm type for convergence test
>>       PC Object: (fieldsplit_0_) 1 MPI processes
>>         type: gamg
>>           type is MULTIPLICATIVE, levels=2 cycles=v
>>             Cycles per PCApply=1
>>             Using externally compute Galerkin coarse grid matrices
>>             GAMG specific options
>>               Threshold for dropping small values in graph on each level =   
>>      
>>               Threshold scaling factor for each level not specified = 1.
>>               AGG specific options
>>                 Symmetric graph false
>>                 Number of levels to square graph 1
>>                 Number smoothing steps 1
>>               Complexity:    grid = 1.00222
>>         Coarse grid solver -- level -------------------------------
>>           KSP Object: (fieldsplit_0_mg_coarse_) 1 MPI processes
>>             type: preonly
>>             maximum iterations=10000, initial guess is zero
>>             tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>             left preconditioning
>>             using NONE norm type for convergence test
>>           PC Object: (fieldsplit_0_mg_coarse_) 1 MPI processes
>>             type: bjacobi
>>               number of blocks = 1
>>               Local solver is the same for all blocks, as in the following 
>> KSP and PC objects on rank 0:
>>             KSP Object: (fieldsplit_0_mg_coarse_sub_) 1 MPI processes
>>               type: preonly
>>               maximum iterations=1, initial guess is zero
>>               tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>               left preconditioning
>>               using NONE norm type for convergence test
>>             PC Object: (fieldsplit_0_mg_coarse_sub_) 1 MPI processes
>>               type: lu
>>                 out-of-place factorization
>>                 tolerance for zero pivot 2.22045e-14
>>                 using diagonal shift on blocks to prevent zero pivot 
>> [INBLOCKS]
>>                 matrix ordering: nd
>>                 factor fill ratio given 5., needed 1.
>>                   Factored matrix follows:
>>                     Mat Object: 1 MPI processes
>>                       type: seqaij
>>                       rows=8, cols=8
>>                       package used to perform factorization: petsc
>>                       total: nonzeros=56, allocated nonzeros=56
>>                         using I-node routines: found 3 nodes, limit used is 5
>>               linear system matrix = precond matrix:
>>               Mat Object: 1 MPI processes
>>                 type: seqaij
>>                 rows=8, cols=8
>>                 total: nonzeros=56, allocated nonzeros=56
>>                 total number of mallocs used during MatSetValues calls=0
>>                   using I-node routines: found 3 nodes, limit used is 5
>>             linear system matrix = precond matrix:
>>             Mat Object: 1 MPI processes
>>               type: mpiaij
>>               rows=8, cols=8
>>               total: nonzeros=56, allocated nonzeros=56
>>               total number of mallocs used during MatSetValues calls=0
>>                 using nonscalable MatPtAP() implementation
>>                 using I-node (on process 0) routines: found 3 nodes, limit 
>> used is 5
>>         Down solver (pre-smoother) on level 1 -------------------------------
>>           KSP Object: (fieldsplit_0_mg_levels_1_) 1 MPI processes
>>             type: chebyshev
>>               eigenvalue estimates used:  min = 0.0998145, max = 1.09796
>>               eigenvalues estimate via gmres min 0.00156735, max 0.998145
>>               eigenvalues estimated using gmres with translations  [0. 0.1; 
>> 0. 1.1]
>>               KSP Object: (fieldsplit_0_mg_levels_1_esteig_) 1 MPI processes
>>                 type: gmres
>>                   restart=30, using Classical (unmodified) Gram-Schmidt 
>> Orthogonalization with no iterative refinement
>>                   happy breakdown tolerance 1e-30
>>                 maximum iterations=10, initial guess is zero
>>                 tolerances:  relative=1e-12, absolute=1e-50, 
>> divergence=10000.
>>                 left preconditioning
>>                 using PRECONDITIONED norm type for convergence test
>>               estimating eigenvalues using noisy right hand side
>>             maximum iterations=2, nonzero initial guess
>>             tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>             left preconditioning
>>             using NONE norm type for convergence test
>>           PC Object: (fieldsplit_0_mg_levels_1_) 1 MPI processes
>>             type: sor
>>               type = local_symmetric, iterations = 1, local iterations = 1, 
>> omega = 1.
>>             linear system matrix = precond matrix:
>>             Mat Object: (fieldsplit_0_) 1 MPI processes
>>               type: mpiaij
>>               rows=480, cols=480
>>               total: nonzeros=25200, allocated nonzeros=25200
>>               total number of mallocs used during MatSetValues calls=0
>>                 using I-node (on process 0) routines: found 160 nodes, limit 
>> used is 5
>>         Up solver (post-smoother) same as down solver (pre-smoother)
>>         linear system matrix = precond matrix:
>>         Mat Object: (fieldsplit_0_) 1 MPI processes
>>           type: mpiaij
>>           rows=480, cols=480
>>           total: nonzeros=25200, allocated nonzeros=25200
>>           total number of mallocs used during MatSetValues calls=0
>>             using I-node (on process 0) routines: found 160 nodes, limit 
>> used is 5
>>     KSP solver for S = A11 - A10 inv(A00) A01 
>>       KSP Object: (fieldsplit_1_) 1 MPI processes
>>         type: preonly
>>         maximum iterations=10000, initial guess is zero
>>         tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>         left preconditioning
>>         using NONE norm type for convergence test
>>       PC Object: (fieldsplit_1_) 1 MPI processes
>>         type: bjacobi
>>           number of blocks = 1
>>           Local solver is the same for all blocks, as in the following KSP 
>> and PC objects on rank 0:
>>         KSP Object: (fieldsplit_1_sub_) 1 MPI processes
>>           type: preonly
>>           maximum iterations=10000, initial guess is zero
>>           tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>           left preconditioning
>>           using NONE norm type for convergence test
>>         PC Object: (fieldsplit_1_sub_) 1 MPI processes
>>           type: bjacobi
>>             number of blocks = 1
>>             Local solver is the same for all blocks, as in the following KSP 
>> and PC objects on rank 0:
>>                     KSP Object:           (fieldsplit_1_sub_sub_)           
>> 1 MPI processes
>>                       type: preonly
>>                       maximum iterations=10000, initial guess is zero
>>                       tolerances:  relative=1e-05, absolute=1e-50, 
>> divergence=10000.
>>                       left preconditioning
>>                       using NONE norm type for convergence test
>>                     PC Object:           (fieldsplit_1_sub_sub_)           1 
>> MPI processes
>>                       type: ilu
>>                         out-of-place factorization
>>                         0 levels of fill
>>                         tolerance for zero pivot 2.22045e-14
>>                         matrix ordering: natural
>>                         factor fill ratio given 1., needed 1.
>>                           Factored matrix follows:
>>                             Mat Object:           1 MPI processes
>>                               type: seqaij
>>                               rows=144, cols=144
>>                               package used to perform factorization: petsc
>>                               total: nonzeros=240, allocated nonzeros=240
>>                                 not using I-node routines
>>                       linear system matrix = precond matrix:
>>                       Mat Object:           1 MPI processes
>>                         type: seqaij
>>                         rows=144, cols=144
>>                         total: nonzeros=240, allocated nonzeros=240
>>                         total number of mallocs used during MatSetValues 
>> calls=0
>>                           not using I-node routines
>>           linear system matrix = precond matrix:
>>           Mat Object: 1 MPI processes
>>             type: mpiaij
>>             rows=144, cols=144
>>             total: nonzeros=240, allocated nonzeros=240
>>             total number of mallocs used during MatSetValues calls=0
>>               not using I-node (on process 0) routines
>>         linear system matrix followed by preconditioner matrix:
>>         Mat Object: (fieldsplit_1_) 1 MPI processes
>>           type: schurcomplement
>>           rows=144, cols=144
>>             Schur complement A11 - A10 inv(A00) A01
>>             A11
>>               Mat Object: (fieldsplit_1_) 1 MPI processes
>>                 type: mpiaij
>>                 rows=144, cols=144
>>                 total: nonzeros=240, allocated nonzeros=240
>>                 total number of mallocs used during MatSetValues calls=0
>>                   not using I-node (on process 0) routines
>>             A10
>>               Mat Object: 1 MPI processes
>>                 type: mpiaij
>>                 rows=144, cols=480
>>                 total: nonzeros=48, allocated nonzeros=48
>>                 total number of mallocs used during MatSetValues calls=0
>>                   using I-node (on process 0) routines: found 74 nodes, 
>> limit used is 5
>>             KSP of A00
>>               KSP Object: (fieldsplit_0_) 1 MPI processes
>>                 type: preonly
>>                 maximum iterations=10000, initial guess is zero
>>                 tolerances:  relative=1e-05, absolute=1e-50, 
>> divergence=10000.
>>                 left preconditioning
>>                 using NONE norm type for convergence test
>>               PC Object: (fieldsplit_0_) 1 MPI processes
>>                 type: gamg
>>                   type is MULTIPLICATIVE, levels=2 cycles=v
>>                     Cycles per PCApply=1
>>                     Using externally compute Galerkin coarse grid matrices
>>                     GAMG specific options
>>                       Threshold for dropping small values in graph on each 
>> level =                
>>                       Threshold scaling factor for each level not specified 
>> = 1.
>>                       AGG specific options
>>                         Symmetric graph false
>>                         Number of levels to square graph 1
>>                         Number smoothing steps 1
>>                       Complexity:    grid = 1.00222
>>                 Coarse grid solver -- level -------------------------------
>>                   KSP Object: (fieldsplit_0_mg_coarse_) 1 MPI processes
>>                     type: preonly
>>                     maximum iterations=10000, initial guess is zero
>>                     tolerances:  relative=1e-05, absolute=1e-50, 
>> divergence=10000.
>>                     left preconditioning
>>                     using NONE norm type for convergence test
>>                   PC Object: (fieldsplit_0_mg_coarse_) 1 MPI processes
>>                     type: bjacobi
>>                       number of blocks = 1
>>                       Local solver is the same for all blocks, as in the 
>> following KSP and PC objects on rank 0:
>>                     KSP Object: (fieldsplit_0_mg_coarse_sub_) 1 MPI processes
>>                       type: preonly
>>                       maximum iterations=1, initial guess is zero
>>                       tolerances:  relative=1e-05, absolute=1e-50, 
>> divergence=10000.
>>                       left preconditioning
>>                       using NONE norm type for convergence test
>>                     PC Object: (fieldsplit_0_mg_coarse_sub_) 1 MPI processes
>>                       type: lu
>>                         out-of-place factorization
>>                         tolerance for zero pivot 2.22045e-14
>>                         using diagonal shift on blocks to prevent zero pivot 
>> [INBLOCKS]
>>                         matrix ordering: nd
>>                         factor fill ratio given 5., needed 1.
>>                           Factored matrix follows:
>>                             Mat Object: 1 MPI processes
>>                               type: seqaij
>>                               rows=8, cols=8
>>                               package used to perform factorization: petsc
>>                               total: nonzeros=56, allocated nonzeros=56
>>                                 using I-node routines: found 3 nodes, limit 
>> used is 5
>>                       linear system matrix = precond matrix:
>>                       Mat Object: 1 MPI processes
>>                         type: seqaij
>>                         rows=8, cols=8
>>                         total: nonzeros=56, allocated nonzeros=56
>>                         total number of mallocs used during MatSetValues 
>> calls=0
>>                           using I-node routines: found 3 nodes, limit used 
>> is 5
>>                     linear system matrix = precond matrix:
>>                     Mat Object: 1 MPI processes
>>                       type: mpiaij
>>                       rows=8, cols=8
>>                       total: nonzeros=56, allocated nonzeros=56
>>                       total number of mallocs used during MatSetValues 
>> calls=0
>>                         using nonscalable MatPtAP() implementation
>>                         using I-node (on process 0) routines: found 3 nodes, 
>> limit used is 5
>>                 Down solver (pre-smoother) on level 1 
>> -------------------------------
>>                   KSP Object: (fieldsplit_0_mg_levels_1_) 1 MPI processes
>>                     type: chebyshev
>>                       eigenvalue estimates used:  min = 0.0998145, max = 
>> 1.09796
>>                       eigenvalues estimate via gmres min 0.00156735, max 
>> 0.998145
>>                       eigenvalues estimated using gmres with translations  
>> [0. 0.1; 0. 1.1]
>>                       KSP Object: (fieldsplit_0_mg_levels_1_esteig_) 1 MPI 
>> processes
>>                         type: gmres
>>                           restart=30, using Classical (unmodified) 
>> Gram-Schmidt Orthogonalization with no iterative refinement
>>                           happy breakdown tolerance 1e-30
>>                         maximum iterations=10, initial guess is zero
>>                         tolerances:  relative=1e-12, absolute=1e-50, 
>> divergence=10000.
>>                         left preconditioning
>>                         using PRECONDITIONED norm type for convergence test
>>                       estimating eigenvalues using noisy right hand side
>>                     maximum iterations=2, nonzero initial guess
>>                     tolerances:  relative=1e-05, absolute=1e-50, 
>> divergence=10000.
>>                     left preconditioning
>>                     using NONE norm type for convergence test
>>                   PC Object: (fieldsplit_0_mg_levels_1_) 1 MPI processes
>>                     type: sor
>>                       type = local_symmetric, iterations = 1, local 
>> iterations = 1, omega = 1.
>>                     linear system matrix = precond matrix:
>>                     Mat Object: (fieldsplit_0_) 1 MPI processes
>>                       type: mpiaij
>>                       rows=480, cols=480
>>                       total: nonzeros=25200, allocated nonzeros=25200
>>                       total number of mallocs used during MatSetValues 
>> calls=0
>>                         using I-node (on process 0) routines: found 160 
>> nodes, limit used is 5
>>                 Up solver (post-smoother) same as down solver (pre-smoother)
>>                 linear system matrix = precond matrix:
>>                 Mat Object: (fieldsplit_0_) 1 MPI processes
>>                   type: mpiaij
>>                   rows=480, cols=480
>>                   total: nonzeros=25200, allocated nonzeros=25200
>>                   total number of mallocs used during MatSetValues calls=0
>>                     using I-node (on process 0) routines: found 160 nodes, 
>> limit used is 5
>>             A01
>>               Mat Object: 1 MPI processes
>>                 type: mpiaij
>>                 rows=480, cols=144
>>                 total: nonzeros=48, allocated nonzeros=48
>>                 total number of mallocs used during MatSetValues calls=0
>>                   using I-node (on process 0) routines: found 135 nodes, 
>> limit used is 5
>>         Mat Object: 1 MPI processes
>>           type: mpiaij
>>           rows=144, cols=144
>>           total: nonzeros=240, allocated nonzeros=240
>>           total number of mallocs used during MatSetValues calls=0
>>             not using I-node (on process 0) routines
>>   linear system matrix = precond matrix:
>>   Mat Object: 1 MPI processes
>>     type: mpiaij
>>     rows=624, cols=624
>>     total: nonzeros=25536, allocated nonzeros=25536
>>     total number of mallocs used during MatSetValues calls=0
>>       using I-node (on process 0) routines: found 336 nodes, limit used is 5
>> 
>> 
>> Thanks,
>> Xiaofeng
>> 
>> 
>> 
>>> On Jun 17, 2025, at 19:05, Mark Adams <mfad...@lbl.gov> wrote:
>>> 
>>> And don't use -pc_gamg_parallel_coarse_grid_solver.
>>> You can use that in production, but for debugging use -mg_coarse_pc_type svd.
>>> Also, use -options_left and remove anything that is not used.
>>> (I am puzzled: I see -pc_type gamg, not -pc_type fieldsplit.)
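>>> 
>>> For example (just a sketch; "./your_app" stands in for your executable and 
>>> "<current options>" for the options you already pass), a debug run would 
>>> look like
>>> 
>>>   ./your_app <current options> -options_left
>>> 
>>> and anything reported as unused at the end of the run can be dropped.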
>>> 
>>> Mark
>>> 
>>> 
>>> On Mon, Jun 16, 2025 at 6:40 AM Matthew Knepley <knep...@gmail.com> wrote:
>>>> On Sun, Jun 15, 2025 at 9:46 PM hexioafeng <hexiaof...@buaa.edu.cn> wrote:
>>>>> Hello,
>>>>> 
>>>>> Here are the options and outputs:
>>>>> 
>>>>> options:
>>>>> 
>>>>> -ksp_type cg -pc_type gamg -pc_gamg_parallel_coarse_grid_solver  
>>>>> -pc_fieldsplit_detect_saddle_point  -pc_fieldsplit_type schur 
>>>>> -pc_fieldsplit_schur_precondition selfp 
>>>>> -fieldsplit_1_mat_schur_complement_ainv_type lump 
>>>>> -pc_fieldsplit_schur_fact_type full -fieldsplit_0_ksp_type preonly 
>>>>> -fieldsplit_0_pc_type gamg -fieldsplit_0_mg_coarse_pc_type_type svd 
>>>>> -fieldsplit_1_ksp_type preonly -fieldsplit_1_pc_type bjacobi 
>>>>> -fieldsplit_1_sub_pc_type sor -ksp_view  -ksp_monitor_true_residual  
>>>>> -ksp_converged_reason  -fieldsplit_0_mg_levels_ksp_monitor_true_residual  
>>>>> -fieldsplit_0_mg_levels_ksp_converged_reason  
>>>>> -fieldsplit_1_ksp_monitor_true_residual  
>>>>> -fieldsplit_1_ksp_converged_reason 
>>>> 
>>>> This option was wrong:
>>>> 
>>>>   -fieldsplit_0_mg_coarse_pc_type_type svd
>>>> 
>>>> From the output, we can see that it should have been
>>>> 
>>>>   -fieldsplit_0_mg_coarse_sub_pc_type_type svd
>>>> 
>>>>   Thanks,
>>>> 
>>>>      Matt
>>>>  
>>>>> output:
>>>>> 
>>>>> 0 KSP unpreconditioned resid norm 2.777777777778e+01 true resid norm 
>>>>> 2.777777777778e+01 ||r(i)||/||b|| 1.000000000000e+00
>>>>> Linear solve did not converge due to DIVERGED_PC_FAILED iterations 0
>>>>>                PC failed due to SUBPC_ERROR
>>>>> KSP Object: 1 MPI processes
>>>>>   type: cg
>>>>>   maximum iterations=200, initial guess is zero
>>>>>   tolerances:  relative=1e-06, absolute=1e-12, divergence=1e+30
>>>>>   left preconditioning
>>>>>   using UNPRECONDITIONED norm type for convergence test
>>>>> PC Object: 1 MPI processes
>>>>>   type: gamg
>>>>>     type is MULTIPLICATIVE, levels=2 cycles=v
>>>>>       Cycles per PCApply=1
>>>>>       Using externally compute Galerkin coarse grid matrices
>>>>>       GAMG specific options
>>>>>         Threshold for dropping small values in graph on each level =
>>>>>         Threshold scaling factor for each level not specified = 1.
>>>>>         AGG specific options
>>>>>           Symmetric graph false
>>>>>           Number of levels to square graph 1
>>>>>           Number smoothing steps 1
>>>>>         Complexity:    grid = 1.00176
>>>>>   Coarse grid solver -- level -------------------------------
>>>>>     KSP Object: (mg_coarse_) 1 MPI processes
>>>>>       type: preonly
>>>>>       maximum iterations=10000, initial guess is zero
>>>>>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>>>>       left preconditioning
>>>>>       using NONE norm type for convergence test
>>>>>     PC Object: (mg_coarse_) 1 MPI processes
>>>>>       type: bjacobi
>>>>>         number of blocks = 1
>>>>>         Local solver is the same for all blocks, as in the following KSP 
>>>>> and PC objects on rank 0:
>>>>>       KSP Object: (mg_coarse_sub_) 1 MPI processes
>>>>>         type: preonly
>>>>>         maximum iterations=1, initial guess is zero
>>>>>         tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>>>>         left preconditioning
>>>>>         using NONE norm type for convergence test
>>>>>       PC Object: (mg_coarse_sub_) 1 MPI processes
>>>>>         type: lu
>>>>>           out-of-place factorization
>>>>>           tolerance for zero pivot 2.22045e-14
>>>>>           using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
>>>>>           matrix ordering: nd
>>>>>           factor fill ratio given 5., needed 1.
>>>>>             Factored matrix follows:
>>>>>               Mat Object: 1 MPI processes
>>>>>                 type: seqaij
>>>>>                 rows=7, cols=7
>>>>>                 package used to perform factorization: petsc
>>>>>                 total: nonzeros=45, allocated nonzeros=45
>>>>>                   using I-node routines: found 3 nodes, limit used is 5
>>>>>         linear system matrix = precond matrix:
>>>>>         Mat Object: 1 MPI processes
>>>>>           type: seqaij
>>>>>           rows=7, cols=7
>>>>>           total: nonzeros=45, allocated nonzeros=45
>>>>>           total number of mallocs used during MatSetValues calls=0
>>>>>             using I-node routines: found 3 nodes, limit used is 5
>>>>>       linear system matrix = precond matrix:
>>>>>       Mat Object: 1 MPI processes
>>>>>         type: mpiaij
>>>>>         rows=7, cols=7
>>>>>         total: nonzeros=45, allocated nonzeros=45
>>>>>         total number of mallocs used during MatSetValues calls=0
>>>>>           using nonscalable MatPtAP() implementation
>>>>>           using I-node (on process 0) routines: found 3 nodes, limit used 
>>>>> is 5
>>>>>   Down solver (pre-smoother) on level 1 -------------------------------
>>>>>     KSP Object: (mg_levels_1_) 1 MPI processes
>>>>>       type: chebyshev
>>>>>         eigenvalue estimates used:  min = 0., max = 0.
>>>>>         eigenvalues estimate via gmres min 0., max 0.
>>>>>         eigenvalues estimated using gmres with translations  [0. 0.1; 0. 
>>>>> 1.1]
>>>>>         KSP Object: (mg_levels_1_esteig_) 1 MPI processes
>>>>>           type: gmres
>>>>>             restart=30, using Classical (unmodified) Gram-Schmidt 
>>>>> Orthogonalization with no iterative refinement
>>>>>             happy breakdown tolerance 1e-30
>>>>>           maximum iterations=10, initial guess is zero
>>>>>           tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
>>>>>           left preconditioning
>>>>>           using PRECONDITIONED norm type for convergence test
>>>>>         PC Object: (mg_levels_1_) 1 MPI processes
>>>>>           type: sor
>>>>>             type = local_symmetric, iterations = 1, local iterations = 1, 
>>>>> omega = 1.
>>>>>           linear system matrix = precond matrix:
>>>>>           Mat Object: 1 MPI processes
>>>>>             type: mpiaij
>>>>>             rows=624, cols=624
>>>>>             total: nonzeros=25536, allocated nonzeros=25536
>>>>>             total number of mallocs used during MatSetValues calls=0
>>>>>               using I-node (on process 0) routines: found 336 nodes, 
>>>>> limit used is 5
>>>>>         estimating eigenvalues using noisy right hand side
>>>>>       maximum iterations=2, nonzero initial guess
>>>>>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>>>>       left preconditioning
>>>>>       using NONE norm type for convergence test
>>>>>     PC Object: (mg_levels_1_) 1 MPI processes
>>>>>       type: sor
>>>>>         type = local_symmetric, iterations = 1, local iterations = 1, 
>>>>> omega = 1.
>>>>>       linear system matrix = precond matrix:
>>>>>       Mat Object: 1 MPI processes
>>>>>         type: mpiaij
>>>>>         rows=624, cols=624
>>>>>         total: nonzeros=25536, allocated nonzeros=25536
>>>>>         total number of mallocs used during MatSetValues calls=0
>>>>>           using I-node (on process 0) routines: found 336 nodes, limit 
>>>>> used is 5
>>>>>   Up solver (post-smoother) same as down solver (pre-smoother)
>>>>>   linear system matrix = precond matrix:
>>>>>   Mat Object: 1 MPI processes
>>>>>     type: mpiaij
>>>>>     rows=624, cols=624
>>>>>     total: nonzeros=25536, allocated nonzeros=25536
>>>>>     total number of mallocs used during MatSetValues calls=0
>>>>>       using I-node (on process 0) routines: found 336 nodes, limit used 
>>>>> is 5
>>>>> 
>>>>> 
>>>>> Best regards,
>>>>> 
>>>>> Xiaofeng
>>>>> 
>>>>> 
>>>>>> On Jun 14, 2025, at 07:28, Barry Smith <bsm...@petsc.dev> wrote:
>>>>>> 
>>>>>> 
>>>>>>   Matt,
>>>>>> 
>>>>>>    Perhaps we should add options -ksp_monitor_debug and 
>>>>>> -snes_monitor_debug that turn on all possible monitoring for the 
>>>>>> (possibly) nested solvers and all of their converged reasons also? Note 
>>>>>> this is not completely trivial because each preconditioner will have to 
>>>>>> supply its list based on the current solver options for it.
>>>>>> 
>>>>>>    Then we won't need to constantly list a big string of problem-specific 
>>>>>> monitor options for users to apply.
>>>>>> 
>>>>>>   Barry
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>> On Jun 13, 2025, at 9:09 AM, Matthew Knepley <knep...@gmail.com> wrote:
>>>>>>> 
>>>>>>> On Thu, Jun 12, 2025 at 10:55 PM hexioafeng <hexiaof...@buaa.edu.cn> wrote:
>>>>>>>> Dear authors,
>>>>>>>> 
>>>>>>>> I tried -pc_type gamg -pc_gamg_parallel_coarse_grid_solver, and also 
>>>>>>>> -pc_type fieldsplit -pc_fieldsplit_detect_saddle_point 
>>>>>>>> -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type gamg 
>>>>>>>> -fieldsplit_0_mg_coarse_pc_type svd -fieldsplit_1_ksp_type preonly 
>>>>>>>> -fieldsplit_1_pc_type bjacobi -fieldsplit_1_sub_pc_type sor; both 
>>>>>>>> option sets gave the KSP_DIVERGED_PC_FAILED error.
>>>>>>> 
>>>>>>> With any question about convergence, we need to see the output of
>>>>>>> 
>>>>>>>   -ksp_view -ksp_monitor_true_residual -ksp_converged_reason 
>>>>>>> -fieldsplit_0_mg_levels_ksp_monitor_true_residual 
>>>>>>> -fieldsplit_0_mg_levels_ksp_converged_reason 
>>>>>>> -fieldsplit_1_ksp_monitor_true_residual 
>>>>>>> -fieldsplit_1_ksp_converged_reason
>>>>>>> 
>>>>>>> and all the error output.
>>>>>>> 
>>>>>>>   Thanks,
>>>>>>> 
>>>>>>>      Matt
>>>>>>>  
>>>>>>>> Thanks,
>>>>>>>> 
>>>>>>>> Xiaofeng
>>>>>>>> 
>>>>>>>> 
>>>>>>>>> On Jun 12, 2025, at 20:50, Mark Adams <mfad...@lbl.gov> wrote:
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> On Thu, Jun 12, 2025 at 8:44 AM Matthew Knepley <knep...@gmail.com> wrote:
>>>>>>>>>> On Thu, Jun 12, 2025 at 4:58 AM Mark Adams <mfad...@lbl.gov> wrote:
>>>>>>>>>>> Adding this to the PETSc mailing list,
>>>>>>>>>>> 
>>>>>>>>>>> On Thu, Jun 12, 2025 at 3:43 AM hexioafeng <hexiaof...@buaa.edu.cn> wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>> Dear Professor,
>>>>>>>>>>>> 
>>>>>>>>>>>> I hope this message finds you well.
>>>>>>>>>>>> 
>>>>>>>>>>>> I am an employee at a CAE company and a heavy user of the PETSc 
>>>>>>>>>>>> library. I would like to thank you for your contributions to PETSc 
>>>>>>>>>>>> and express my deep appreciation for your work.
>>>>>>>>>>>> 
>>>>>>>>>>>> Recently, I encountered some difficulties when using PETSc to 
>>>>>>>>>>>> solve structural mechanics problems with Lagrange multiplier 
>>>>>>>>>>>> constraints. After searching extensively online and reviewing 
>>>>>>>>>>>> several papers, I found that your paper "Algebraic multigrid 
>>>>>>>>>>>> methods for constrained linear systems with applications to 
>>>>>>>>>>>> contact problems in solid mechanics" seems to be the most 
>>>>>>>>>>>> relevant and helpful. 
>>>>>>>>>>>> 
>>>>>>>>>>>> The stiffness matrix I'm working with, K, is a block saddle-point 
>>>>>>>>>>>> matrix of the form (A00 A01; A10 0), where A00 is singular, just as 
>>>>>>>>>>>> described in your paper and unlike in many other articles. 
>>>>>>>>>>>> I have a few questions regarding your work and would greatly 
>>>>>>>>>>>> appreciate your insights:
>>>>>>>>>>>> 
>>>>>>>>>>>> 1. Is the AMG/KKT method presented in your paper available in 
>>>>>>>>>>>> PETSc? I tried using CG+GAMG directly but received a 
>>>>>>>>>>>> KSP_DIVERGED_PC_FAILED error. I also attempted to use 
>>>>>>>>>>>> CG+PCFIELDSPLIT with the following options:  
>>>>>>>>>>> 
>>>>>>>>>>> No
>>>>>>>>>>>  
>>>>>>>>>>>>    
>>>>>>>>>>>>     -pc_type fieldsplit -pc_fieldsplit_detect_saddle_point 
>>>>>>>>>>>> -pc_fieldsplit_type schur -pc_fieldsplit_schur_precondition selfp 
>>>>>>>>>>>> -pc_fieldsplit_schur_fact_type full -fieldsplit_0_ksp_type preonly 
>>>>>>>>>>>> -fieldsplit_0_pc_type gamg -fieldsplit_1_ksp_type preonly 
>>>>>>>>>>>> -fieldsplit_1_pc_type bjacobi  
>>>>>>>>>>>>    
>>>>>>>>>>>>    Unfortunately, this also resulted in a KSP_DIVERGED_PC_FAILED 
>>>>>>>>>>>> error. Do you have any suggestions?
>>>>>>>>>>>> 
>>>>>>>>>>>> 2. In your paper, you compare the method with Uzawa-type 
>>>>>>>>>>>> approaches. To my understanding, Uzawa methods typically require 
>>>>>>>>>>>> A00 to be invertible. How did you handle the singularity of A00 to 
>>>>>>>>>>>> construct an M-matrix that is invertible?
>>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> You add a regularization term like A01 * A10 (like springs). See 
>>>>>>>>>>> the paper or any reference to augmented Lagrangian or Uzawa methods.
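>>>>>>>>>>> 
>>>>>>>>>>> (As a rough sketch: for a symmetric problem with A01 = A10^T, A00 
>>>>>>>>>>> positive semidefinite, and a penalty parameter gamma > 0 of your 
>>>>>>>>>>> choosing, the regularized leading block would look like
>>>>>>>>>>> 
>>>>>>>>>>>   A00_reg = A00 + gamma * A01 * A10
>>>>>>>>>>> 
>>>>>>>>>>> which is invertible as long as no null vector of A00 also lies in 
>>>>>>>>>>> the null space of A10.)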
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>>> 3. Can I implement the AMG/KKT method in your paper using existing 
>>>>>>>>>>>> AMG APIs? Implementing a production-level AMG solver from scratch 
>>>>>>>>>>>> would be quite challenging for me, so I’m hoping to utilize 
>>>>>>>>>>>> existing AMG interfaces within PETSc or other packages.
>>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> You can do Uzawa and make the regularization matrix with 
>>>>>>>>>>> matrix-matrix products. Just use AMG for the A00 block.
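>>>>>>>>>>> 
>>>>>>>>>>> (For reference, a sketch of the standard Uzawa iteration, writing u 
>>>>>>>>>>> for the displacements, p for the multipliers, and tau for a step size:
>>>>>>>>>>> 
>>>>>>>>>>>   solve  A00_reg u_{k+1} = f - A01 p_k      <- the AMG solve
>>>>>>>>>>>   update p_{k+1} = p_k + tau (A10 u_{k+1} - g)
>>>>>>>>>>> 
>>>>>>>>>>> so only the A00_reg solve needs the multigrid preconditioner.)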
>>>>>>>>>>> 
>>>>>>>>>>>  
>>>>>>>>>>>> 4. For saddle-point systems where A00 is singular, can you 
>>>>>>>>>>>> recommend any more robust or efficient solutions? Alternatively, 
>>>>>>>>>>>> are you aware of any open-source software packages that can handle 
>>>>>>>>>>>> such cases out-of-the-box?
>>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> No, and I don't think PETSc can do this out-of-the-box, but others 
>>>>>>>>>>> may be able to give you a better idea of what PETSc can do.
>>>>>>>>>>> I think PETSc can do Uzawa or other similar algorithms but it will 
>>>>>>>>>>> not do the regularization automatically (it is a bit more 
>>>>>>>>>>> complicated than just A01 * A10)
>>>>>>>>>> 
>>>>>>>>>> One other trick you can use is to have
>>>>>>>>>> 
>>>>>>>>>>   -fieldsplit_0_mg_coarse_pc_type svd
>>>>>>>>>> 
>>>>>>>>>> This will use SVD on the coarse grid of GAMG, which can handle the 
>>>>>>>>>> null space in A00 as long as the prolongation does not put it back 
>>>>>>>>>> in. I have used this for the Laplacian with Neumann conditions and 
>>>>>>>>>> for freely floating elastic problems.
>>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> Good point.
>>>>>>>>> You can also use -pc_gamg_parallel_coarse_grid_solver to get GAMG to 
>>>>>>>>> use a one-level iterative solver for the coarse grid.
>>>>>>>>>  
>>>>>>>>>>   Thanks,
>>>>>>>>>> 
>>>>>>>>>>      Matt
>>>>>>>>>>  
>>>>>>>>>>> Thanks,
>>>>>>>>>>> Mark
>>>>>>>>>>>> 
>>>>>>>>>>>> Thank you very much for taking the time to read my email. Looking 
>>>>>>>>>>>> forward to hearing from you.
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> Sincerely,  
>>>>>>>>>>>> 
>>>>>>>>>>>> Xiaofeng He
>>>>>>>>>>>> -----------------------------------------------------
>>>>>>>>>>>> 
>>>>>>>>>>>> Research Engineer
>>>>>>>>>>>> 
>>>>>>>>>>>> Internet Based Engineering, Beijing, China
>>>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> -- 
>>>>>>>>>> What most experimenters take for granted before they begin their 
>>>>>>>>>> experiments is infinitely more interesting than any results to which 
>>>>>>>>>> their experiments lead.
>>>>>>>>>> -- Norbert Wiener
>>>>>>>>>> 
>>>>>>>>>> https://www.cse.buffalo.edu/~knepley/
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> --
>>>>>>> What most experimenters take for granted before they begin their 
>>>>>>> experiments is infinitely more interesting than any results to which 
>>>>>>> their experiments lead.
>>>>>>> -- Norbert Wiener
>>>>>>> 
>>>>>>> https://www.cse.buffalo.edu/~knepley/
>>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> 
>>>> --
>>>> What most experimenters take for granted before they begin their 
>>>> experiments is infinitely more interesting than any results to which their 
>>>> experiments lead.
>>>> -- Norbert Wiener
>>>> 
>>>> https://www.cse.buffalo.edu/~knepley/
>> 
> 
> 
> 
> --
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/
