Hello,

Here are the options and the output:

options:

-ksp_type cg
-pc_type gamg
-pc_gamg_parallel_coarse_grid_solver
-pc_fieldsplit_detect_saddle_point
-pc_fieldsplit_type schur
-pc_fieldsplit_schur_precondition selfp
-fieldsplit_1_mat_schur_complement_ainv_type lump
-pc_fieldsplit_schur_fact_type full
-fieldsplit_0_ksp_type preonly
-fieldsplit_0_pc_type gamg
-fieldsplit_0_mg_coarse_pc_type svd
-fieldsplit_1_ksp_type preonly
-fieldsplit_1_pc_type bjacobi
-fieldsplit_1_sub_pc_type sor
-ksp_view
-ksp_monitor_true_residual
-ksp_converged_reason
-fieldsplit_0_mg_levels_ksp_monitor_true_residual
-fieldsplit_0_mg_levels_ksp_converged_reason
-fieldsplit_1_ksp_monitor_true_residual
-fieldsplit_1_ksp_converged_reason
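
(Side note: adding

    -options_left

to the run makes PETSc report, at exit, any options that were never used; 
this is a quick way to confirm whether the fieldsplit options above were 
actually picked up by the solver.)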

output:

0 KSP unpreconditioned resid norm 2.777777777778e+01 true resid norm 2.777777777778e+01 ||r(i)||/||b|| 1.000000000000e+00
Linear solve did not converge due to DIVERGED_PC_FAILED iterations 0
               PC failed due to SUBPC_ERROR
KSP Object: 1 MPI processes
  type: cg
  maximum iterations=200, initial guess is zero
  tolerances:  relative=1e-06, absolute=1e-12, divergence=1e+30
  left preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
  type: gamg
    type is MULTIPLICATIVE, levels=2 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level =
        Threshold scaling factor for each level not specified = 1.
        AGG specific options
          Symmetric graph false
          Number of levels to square graph 1
          Number smoothing steps 1
        Complexity:    grid = 1.00176
  Coarse grid solver -- level -------------------------------
    KSP Object: (mg_coarse_) 1 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_coarse_) 1 MPI processes
      type: bjacobi
        number of blocks = 1
        Local solver is the same for all blocks, as in the following KSP and PC objects on rank 0:
      KSP Object: (mg_coarse_sub_) 1 MPI processes
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (mg_coarse_sub_) 1 MPI processes
        type: lu
          out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
          matrix ordering: nd
          factor fill ratio given 5., needed 1.
            Factored matrix follows:
              Mat Object: 1 MPI processes
                type: seqaij
                rows=7, cols=7
                package used to perform factorization: petsc
                total: nonzeros=45, allocated nonzeros=45
                  using I-node routines: found 3 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object: 1 MPI processes
          type: seqaij
          rows=7, cols=7
          total: nonzeros=45, allocated nonzeros=45
          total number of mallocs used during MatSetValues calls=0
            using I-node routines: found 3 nodes, limit used is 5
      linear system matrix = precond matrix:
      Mat Object: 1 MPI processes
        type: mpiaij
        rows=7, cols=7
        total: nonzeros=45, allocated nonzeros=45
        total number of mallocs used during MatSetValues calls=0
          using nonscalable MatPtAP() implementation
          using I-node (on process 0) routines: found 3 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 1 MPI processes
      type: chebyshev
        eigenvalue estimates used:  min = 0., max = 0.
        eigenvalues estimate via gmres min 0., max 0.
        eigenvalues estimated using gmres with translations  [0. 0.1; 0. 1.1]
        KSP Object: (mg_levels_1_esteig_) 1 MPI processes
          type: gmres
            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
            happy breakdown tolerance 1e-30
          maximum iterations=10, initial guess is zero
          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        PC Object: (mg_levels_1_) 1 MPI processes
          type: sor
            type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
          linear system matrix = precond matrix:
          Mat Object: 1 MPI processes
            type: mpiaij
            rows=624, cols=624
            total: nonzeros=25536, allocated nonzeros=25536
            total number of mallocs used during MatSetValues calls=0
              using I-node (on process 0) routines: found 336 nodes, limit used is 5
        estimating eigenvalues using noisy right hand side
      maximum iterations=2, nonzero initial guess
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_1_) 1 MPI processes
      type: sor
        type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
      linear system matrix = precond matrix:
      Mat Object: 1 MPI processes
        type: mpiaij
        rows=624, cols=624
        total: nonzeros=25536, allocated nonzeros=25536
        total number of mallocs used during MatSetValues calls=0
          using I-node (on process 0) routines: found 336 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 1 MPI processes
    type: mpiaij
    rows=624, cols=624
    total: nonzeros=25536, allocated nonzeros=25536
    total number of mallocs used during MatSetValues calls=0
      using I-node (on process 0) routines: found 336 nodes, limit used is 5


Best regards,

Xiaofeng


> On Jun 14, 2025, at 07:28, Barry Smith <bsm...@petsc.dev> wrote:
> 
> 
>   Matt,
> 
>    Perhaps we should add options -ksp_monitor_debug and -snes_monitor_debug 
> that turn on all possible monitoring for the (possibly) nested solvers, along 
> with all of their converged reasons? Note this is not completely trivial, 
> because each preconditioner will have to supply its list based on the current 
> solver options for it.
> 
>    Then we won't need to constantly list a big string of problem-specific 
> monitor options to ask the user to use.
> 
>   Barry
> 
> 
> 
> 
>> On Jun 13, 2025, at 9:09 AM, Matthew Knepley <knep...@gmail.com> wrote:
>> 
>> On Thu, Jun 12, 2025 at 10:55 PM hexioafeng <hexiaof...@buaa.edu.cn 
>> <mailto:hexiaof...@buaa.edu.cn>> wrote:
>>> Dear authors,
>>> 
>>> I tried "-pc_type gamg -pc_gamg_parallel_coarse_grid_solver" and "-pc_type 
>>> fieldsplit -pc_fieldsplit_detect_saddle_point -fieldsplit_0_ksp_type 
>>> preonly -fieldsplit_0_pc_type gamg -fieldsplit_0_mg_coarse_pc_type svd 
>>> -fieldsplit_1_ksp_type preonly -fieldsplit_1_pc_type bjacobi 
>>> -fieldsplit_1_sub_pc_type sor"; both option sets produced the 
>>> KSP_DIVERGED_PC_FAILED error.
>> 
>> With any question about convergence, we need to see the output of
>> 
>>   -ksp_view -ksp_monitor_true_residual -ksp_converged_reason 
>> -fieldsplit_0_mg_levels_ksp_monitor_true_residual 
>> -fieldsplit_0_mg_levels_ksp_converged_reason 
>> -fieldsplit_1_ksp_monitor_true_residual -fieldsplit_1_ksp_converged_reason
>> 
>> and all the error output.
>> 
>>   Thanks,
>> 
>>      Matt
>>  
>>> Thanks,
>>> 
>>> Xiaofeng
>>> 
>>> 
>>>> On Jun 12, 2025, at 20:50, Mark Adams <mfad...@lbl.gov 
>>>> <mailto:mfad...@lbl.gov>> wrote:
>>>> 
>>>> 
>>>> 
>>>> On Thu, Jun 12, 2025 at 8:44 AM Matthew Knepley <knep...@gmail.com 
>>>> <mailto:knep...@gmail.com>> wrote:
>>>>> On Thu, Jun 12, 2025 at 4:58 AM Mark Adams <mfad...@lbl.gov 
>>>>> <mailto:mfad...@lbl.gov>> wrote:
>>>>>> Adding this to the PETSc mailing list,
>>>>>> 
>>>>>> On Thu, Jun 12, 2025 at 3:43 AM hexioafeng <hexiaof...@buaa.edu.cn 
>>>>>> <mailto:hexiaof...@buaa.edu.cn>> wrote:
>>>>>>> 
>>>>>>> Dear Professor,
>>>>>>> 
>>>>>>> I hope this message finds you well.
>>>>>>> 
>>>>>>> I am an employee at a CAE company and a heavy user of the PETSc 
>>>>>>> library. I would like to thank you for your contributions to PETSc and 
>>>>>>> express my deep appreciation for your work.
>>>>>>> 
>>>>>>> Recently, I encountered some difficulties when using PETSc to solve 
>>>>>>> structural mechanics problems with Lagrange multiplier constraints. 
>>>>>>> After searching extensively online and reviewing several papers, I 
>>>>>>> found that your paper "Algebraic multigrid methods for 
>>>>>>> constrained linear systems with applications to contact problems in 
>>>>>>> solid mechanics" seems to be the most relevant and helpful. 
>>>>>>> 
>>>>>>> The stiffness matrix I'm working with, K, is a block saddle-point 
>>>>>>> matrix of the form (A00 A01; A10 0), where A00 is singular, just as 
>>>>>>> described in your paper (and unlike in many other articles). I 
>>>>>>> have a few questions regarding your work and would greatly appreciate 
>>>>>>> your insights:
>>>>>>> 
>>>>>>> 1. Is the AMG/KKT method presented in your paper available in PETSc? I 
>>>>>>> tried using CG+GAMG directly but received a KSP_DIVERGED_PC_FAILED 
>>>>>>> error. I also attempted to use CG+PCFIELDSPLIT with the following 
>>>>>>> options:  
>>>>>> 
>>>>>> No
>>>>>>  
>>>>>>>    
>>>>>>>     -pc_type fieldsplit -pc_fieldsplit_detect_saddle_point 
>>>>>>> -pc_fieldsplit_type schur -pc_fieldsplit_schur_precondition selfp 
>>>>>>> -pc_fieldsplit_schur_fact_type full -fieldsplit_0_ksp_type preonly 
>>>>>>> -fieldsplit_0_pc_type gamg -fieldsplit_1_ksp_type preonly 
>>>>>>> -fieldsplit_1_pc_type bjacobi  
>>>>>>>    
>>>>>>>    Unfortunately, this also resulted in a KSP_DIVERGED_PC_FAILED error. 
>>>>>>> Do you have any suggestions?
>>>>>>> 
>>>>>>> 2. In your paper, you compare the method with Uzawa-type approaches. To 
>>>>>>> my understanding, Uzawa methods typically require A00 to be invertible. 
>>>>>>> How did you handle the singularity of A00 to construct a matrix M that 
>>>>>>> is invertible?
>>>>>>> 
>>>>>> 
>>>>>> You add a regularization term like A01 * A10 (like springs). See the 
>>>>>> paper, or any reference on augmented Lagrangian or Uzawa methods.
>>>>>> 
>>>>>> 
>>>>>>> 3. Can I implement the AMG/KKT method in your paper using existing AMG 
>>>>>>> APIs? Implementing a production-level AMG solver from scratch would be 
>>>>>>> quite challenging for me, so I’m hoping to utilize existing AMG 
>>>>>>> interfaces within PETSc or other packages.
>>>>>>> 
>>>>>> 
>>>>>> You can do Uzawa and make the regularization matrix with matrix-matrix 
>>>>>> products. Just use AMG for the A00 block.
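>>>>>> 
>>>>>> A minimal sketch of those matrix-matrix products in PETSc (the function 
>>>>>> name RegularizeA00 is hypothetical; MatMatMult, MatAXPY, and MatDestroy 
>>>>>> are the standard calls):
>>>>>> 
>>>>>>     #include <petscmat.h>
>>>>>> 
>>>>>>     /* Sketch: given assembled Mat A00, A01, A10 and a PetscScalar gamma,
>>>>>>        build A00 + gamma * A01 * A10 for use as the (0,0) block. */
>>>>>>     static PetscErrorCode RegularizeA00(Mat A00, Mat A01, Mat A10, PetscScalar gamma)
>>>>>>     {
>>>>>>       Mat R;
>>>>>> 
>>>>>>       PetscFunctionBeginUser;
>>>>>>       /* R = A01 * A10, the "spring" term */
>>>>>>       PetscCall(MatMatMult(A01, A10, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &R));
>>>>>>       /* A00 <- A00 + gamma * R; the sparsity patterns differ in general */
>>>>>>       PetscCall(MatAXPY(A00, gamma, R, DIFFERENT_NONZERO_PATTERN));
>>>>>>       PetscCall(MatDestroy(&R));
>>>>>>       PetscFunctionReturn(PETSC_SUCCESS);
>>>>>>     }
>>>>>> 
>>>>>> The regularized A00 can then be handed to GAMG as usual.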
>>>>>> 
>>>>>>  
>>>>>>> 4. For saddle-point systems where A00 is singular, can you recommend 
>>>>>>> any more robust or efficient solutions? Alternatively, are you aware of 
>>>>>>> any open-source software packages that can handle such cases 
>>>>>>> out-of-the-box?
>>>>>>> 
>>>>>> 
>>>>>> No, and I don't think PETSc can do this out-of-the-box, but others may 
>>>>>> be able to give you a better idea of what PETSc can do.
>>>>>> I think PETSc can do Uzawa or other similar algorithms, but it will not 
>>>>>> do the regularization automatically (it is a bit more complicated than 
>>>>>> just A01 * A10).
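>>>>>> 
>>>>>> (For reference, a sketch of the usual augmented-Lagrangian form, with W 
>>>>>> a user-chosen SPD weight matrix: the system (A00 u + A01 p = f, 
>>>>>> A10 u = g) is replaced by the equivalent one with
>>>>>> 
>>>>>>     A00_aug = A00 + gamma * A01 * W^{-1} * A10
>>>>>>     f_aug   = f   + gamma * A01 * W^{-1} * g
>>>>>> 
>>>>>> so that A00_aug is invertible when A00 is positive definite on the 
>>>>>> kernel of A10, even though A00 itself is singular.)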
>>>>> 
>>>>> One other trick you can use is to have
>>>>> 
>>>>>   -fieldsplit_0_mg_coarse_pc_type svd
>>>>> 
>>>>> This will use SVD on the coarse grid of GAMG, which can handle the null 
>>>>> space in A00 as long as the prolongation does not put it back in. I have 
>>>>> used this for the Laplacian with Neumann conditions and for freely 
>>>>> floating elastic problems.
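>>>>> 
>>>>> (The same trick should also work when GAMG is used directly as the outer 
>>>>> preconditioner rather than inside a fieldsplit; the coarse solver then 
>>>>> has the plain mg_coarse_ prefix, i.e.
>>>>> 
>>>>>   -pc_type gamg -mg_coarse_pc_type svd
>>>>> 
>>>>> as can be seen from the (mg_coarse_) prefix in the ksp_view output.)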
>>>>> 
>>>> 
>>>> Good point.
>>>> You can also use -pc_gamg_parallel_coarse_grid_solver to get GAMG to use a 
>>>> one-level iterative solver for the coarse grid.
>>>>  
>>>>>   Thanks,
>>>>> 
>>>>>      Matt
>>>>>  
>>>>>> Thanks,
>>>>>> Mark
>>>>>>> 
>>>>>>> Thank you very much for taking the time to read my email. Looking 
>>>>>>> forward to hearing from you.
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> Sincerely,  
>>>>>>> 
>>>>>>> Xiaofeng He
>>>>>>> -----------------------------------------------------
>>>>>>> 
>>>>>>> Research Engineer
>>>>>>> 
>>>>>>> Internet Based Engineering, Beijing, China
>>>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> -- 
>>>>> What most experimenters take for granted before they begin their 
>>>>> experiments is infinitely more interesting than any results to which 
>>>>> their experiments lead.
>>>>> -- Norbert Wiener
>>>>> 
>>>>> https://www.cse.buffalo.edu/~knepley/
>> 
>> 
>> 
>> --
>> What most experimenters take for granted before they begin their experiments 
>> is infinitely more interesting than any results to which their experiments 
>> lead.
>> -- Norbert Wiener
>> 
>> https://www.cse.buffalo.edu/~knepley/
> 
