Dear authors,

I tried -pc_type gamg -pc_gamg_parallel_coarse_grid_solver, and also -pc_type 
fieldsplit -pc_fieldsplit_detect_saddle_point -fieldsplit_0_ksp_type preonly 
-fieldsplit_0_pc_type gamg -fieldsplit_0_mg_coarse_pc_type svd 
-fieldsplit_1_ksp_type preonly -fieldsplit_1_pc_type bjacobi 
-fieldsplit_1_sub_pc_type ilu. Both option sets failed with the 
KSP_DIVERGED_PC_FAILED error.
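For clarity, here are the two configurations written out in full. The
executable name and the mpiexec launch are placeholders for my application;
the PETSc options themselves are the ones I tried:

```shell
# Option set 1: CG + GAMG on the full matrix, with an iterative
# solver on GAMG's coarsest grid instead of a redundant direct solve.
mpiexec -n 4 ./my_solver -ksp_type cg \
  -pc_type gamg \
  -pc_gamg_parallel_coarse_grid_solver \
  -ksp_converged_reason

# Option set 2: fieldsplit on the detected saddle point, GAMG on the
# (0,0) block with an SVD coarse-grid solve to absorb the null space
# of A00, and block Jacobi on the (1,1) block.
mpiexec -n 4 ./my_solver -ksp_type cg \
  -pc_type fieldsplit -pc_fieldsplit_detect_saddle_point \
  -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type gamg \
  -fieldsplit_0_mg_coarse_pc_type svd \
  -fieldsplit_1_ksp_type preonly -fieldsplit_1_pc_type bjacobi \
  -ksp_converged_reason
```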

Thanks,

Xiaofeng


> On Jun 12, 2025, at 20:50, Mark Adams <mfad...@lbl.gov> wrote:
> 
> 
> 
> On Thu, Jun 12, 2025 at 8:44 AM Matthew Knepley <knep...@gmail.com 
> <mailto:knep...@gmail.com>> wrote:
>> On Thu, Jun 12, 2025 at 4:58 AM Mark Adams <mfad...@lbl.gov 
>> <mailto:mfad...@lbl.gov>> wrote:
>>> Adding this to the PETSc mailing list,
>>> 
>>> On Thu, Jun 12, 2025 at 3:43 AM hexioafeng <hexiaof...@buaa.edu.cn 
>>> <mailto:hexiaof...@buaa.edu.cn>> wrote:
>>>> 
>>>> Dear Professor,
>>>> 
>>>> I hope this message finds you well.
>>>> 
>>>> I am an employee at a CAE company and a heavy user of the PETSc library. I 
>>>> would like to thank you for your contributions to PETSc and express my 
>>>> deep appreciation for your work.
>>>> 
>>>> Recently, I encountered some difficulties when using PETSc to solve 
>>>> structural mechanics problems with Lagrange multiplier constraints. After 
>>>> searching extensively online and reviewing several papers, I found that 
>>>> your previous paper, "Algebraic multigrid methods for constrained linear 
>>>> systems with applications to contact problems in solid mechanics", seems 
>>>> to be the most relevant and helpful.
>>>> 
>>>> The stiffness matrix I'm working with, K, is a block saddle-point matrix 
>>>> of the form (A00 A01; A10 0), where A00 is singular—just as described in 
>>>> your paper, and different from many other articles. I have a few 
>>>> questions regarding your work and would greatly appreciate your insights:
>>>> 
>>>> 1. Is the AMG/KKT method presented in your paper available in PETSc? I 
>>>> tried using CG+GAMG directly but received a KSP_DIVERGED_PC_FAILED error. 
>>>> I also attempted to use CG+PCFIELDSPLIT with the following options:  
>>> 
>>> No
>>>  
>>>>    
>>>>     -pc_type fieldsplit -pc_fieldsplit_detect_saddle_point 
>>>> -pc_fieldsplit_type schur -pc_fieldsplit_schur_precondition selfp 
>>>> -pc_fieldsplit_schur_fact_type full -fieldsplit_0_ksp_type preonly 
>>>> -fieldsplit_0_pc_type gamg -fieldsplit_1_ksp_type preonly 
>>>> -fieldsplit_1_pc_type bjacobi  
>>>>    
>>>>    Unfortunately, this also resulted in a KSP_DIVERGED_PC_FAILED error. Do 
>>>> you have any suggestions?
>>>> 
>>>> 2. In your paper, you compare the method with Uzawa-type approaches. To my 
>>>> understanding, Uzawa methods typically require A00 to be invertible. How 
>>>> did you handle the singularity of A00 to construct an M-matrix that is 
>>>> invertible?
>>>> 
>>> 
>>> You add a regularization term like A01 * A10 (like springs). See the paper 
>>> or any reference on the augmented Lagrangian method or Uzawa iteration.
>>> 
>>> 
>>>> 3. Can I implement the AMG/KKT method in your paper using existing AMG 
>>>> APIs? Implementing a production-level AMG solver from scratch would be 
>>>> quite challenging for me, so I’m hoping to utilize existing AMG interfaces 
>>>> within PETSc or other packages.
>>>> 
>>> 
>>> You can do Uzawa and make the regularization matrix with matrix-matrix 
>>> products. Just use AMG for the A00 block.
>>> 
>>>  
>>>> 4. For saddle-point systems where A00 is singular, can you recommend any 
>>>> more robust or efficient solutions? Alternatively, are you aware of any 
>>>> open-source software packages that can handle such cases out-of-the-box?
>>>> 
>>> 
>>> No, and I don't think PETSc can do this out-of-the-box, but others may be 
>>> able to give you a better idea of what PETSc can do.
>>> I think PETSc can do Uzawa or other similar algorithms, but it will not do 
>>> the regularization automatically (it is a bit more complicated than just 
>>> A01 * A10).
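>>> 
>>> As a minimal sketch, assuming the symmetric case A01 = A10^T = B^T, a 
>>> penalty parameter gamma > 0, and constraint B x = g (this is one common 
>>> reading of the augmented-Lagrangian idea, not necessarily the exact 
>>> scheme from the paper), the regularized Uzawa iteration looks like:
>>> 
>>>     M = A00 + gamma * B^T B     (invertible provided the null spaces 
>>>                                  of A00 and B intersect only at 0)
>>>     solve    M x_{k+1}      = f - B^T lambda_k + gamma * B^T g
>>>     update   lambda_{k+1}   = lambda_k + tau * (B x_{k+1} - g)
>>> 
>>> with step size tau typically on the order of gamma. Each solve with M 
>>> can then use AMG, since the regularization removes the singularity of 
>>> the A00 block.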
>> 
>> One other trick you can use is to have
>> 
>>   -fieldsplit_0_mg_coarse_pc_type svd
>> 
>> This will use SVD on the coarse grid of GAMG, which can handle the null 
>> space in A00 as long as the prolongation does not put it back in. I have 
>> used this for the Laplacian with Neumann conditions and for freely floating 
>> elastic problems.
>> 
> 
> Good point.
> You can also use -pc_gamg_parallel_coarse_grid_solver to get GAMG to use an 
> iterative solver on the coarse grid.
>  
>>   Thanks,
>> 
>>      Matt
>>  
>>> Thanks,
>>> Mark
>>>> 
>>>> Thank you very much for taking the time to read my email. Looking forward 
>>>> to hearing from you.
>>>> 
>>>> 
>>>> 
>>>> Sincerely,  
>>>> 
>>>> Xiaofeng He
>>>> -----------------------------------------------------
>>>> 
>>>> Research Engineer
>>>> 
>>>> Internet Based Engineering, Beijing, China
>>>> 
>> 
>> 
>> 
>> -- 
>> What most experimenters take for granted before they begin their experiments 
>> is infinitely more interesting than any results to which their experiments 
>> lead.
>> -- Norbert Wiener
>> 
>> https://www.cse.buffalo.edu/~knepley/
