For one MPI rank, it looks like you can use -pc_type cholesky 
-pc_factor_mat_solver_type cupm, though it is not documented in 
https://petsc.org/release/overview/linear_solve_table/#direct-solvers
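
   For reference, a minimal sketch of a driver those options would apply to, 
assuming a recent PETSc (PetscCall-era, 3.17 or later) built with CUDA; the 
matrix fill is elided, and the -n option and variable names are only 
illustrative, so treat it as untested:

    #include <petscksp.h>

    int main(int argc, char **argv)
    {
      Mat      A;
      Vec      b, x;
      KSP      ksp;
      PetscInt n = 10000; /* placeholder size; the question has N up to 10^5 */

      PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
      PetscCall(PetscOptionsGetInt(NULL, NULL, "-n", &n, NULL));

      /* dense SPD matrix stored on the GPU, MATSEQDENSECUDA as in the question */
      PetscCall(MatCreate(PETSC_COMM_SELF, &A));
      PetscCall(MatSetSizes(A, n, n, n, n));
      PetscCall(MatSetType(A, MATSEQDENSECUDA));
      PetscCall(MatSetUp(A));
      /* ... fill A with MatSetValues() so it is SPD ... */
      PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
      PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

      PetscCall(MatCreateVecs(A, &x, &b));
      PetscCall(VecSet(b, 1.0));

      PetscCall(KSPCreate(PETSC_COMM_SELF, &ksp));
      PetscCall(KSPSetOperators(ksp, A, A));
      PetscCall(KSPSetFromOptions(ksp)); /* picks up the options given at run time */
      PetscCall(KSPSolve(ksp, b, x));

      PetscCall(KSPDestroy(&ksp));
      PetscCall(VecDestroy(&x));
      PetscCall(VecDestroy(&b));
      PetscCall(MatDestroy(&A));
      PetscCall(PetscFinalize());
      return 0;
    }

   Running it with -ksp_type preonly -pc_type cholesky 
-pc_factor_mat_solver_type cupm should then give the single-rank direct 
Cholesky solve.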
 

   Or, if you also ./configure --download-kokkos --download-kokkos-kernels, you 
can use -pc_factor_mat_solver_type kokkos. This may also work for multiple 
GPUs, but that is not documented in the table either (Junchao). Nor are the 
sparse Kokkos or CUDA solvers (if they exist) documented in the table.
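
   The same selection can also be hard-coded instead of given on the command 
line; a sketch, assuming the ksp from the driver above (the "kokkos" string 
simply mirrors the option named here):

    PC pc;

    PetscCall(KSPGetPC(ksp, &pc));
    PetscCall(PCSetType(pc, PCCHOLESKY));
    PetscCall(PCFactorSetMatSolverType(pc, "kokkos")); /* or "cupm" on one rank */
    PetscCall(KSPSetUp(ksp)); /* the factorization happens here */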


   Barry



> On Jul 24, 2024, at 2:44 PM, Sreeram R Venkat <srven...@utexas.edu> wrote:
> 
> I have an SPD dense matrix of size NxN, where N can range from 10^4 to 10^5. Are 
> there any Cholesky factorization/solve routines for it in PETSc (or in any of 
> the external libraries)? If possible, I want to use GPU acceleration with 1 
> or more GPUs. The matrix type can be MATSEQDENSE/MATMPIDENSE or 
> MATSEQDENSECUDA/MATMPIDENSECUDA accordingly. If it is possible to do the 
> factorization beforehand and store it to do the triangular solves later, that 
> would be great.
> 
> Thanks,
> Sreeram
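
   On the factor-once, solve-many part of the question above: PETSc's generic 
Mat factor interface is the natural fit. A sketch, reusing A, b, and x from 
the driver earlier, and assuming (untested) that the cupm solver mentioned in 
the reply is reachable through MatGetFactor:

    Mat           F;
    MatFactorInfo info;

    PetscCall(MatFactorInfoInitialize(&info));
    PetscCall(MatGetFactor(A, "cupm", MAT_FACTOR_CHOLESKY, &F));
    PetscCall(MatCholeskyFactorSymbolic(F, A, NULL, &info)); /* NULL: no ordering for dense */
    PetscCall(MatCholeskyFactorNumeric(F, A, &info));
    /* keep F around; each MatSolve() does the forward and backward triangular solves */
    PetscCall(MatSolve(F, b, x));
    PetscCall(MatDestroy(&F));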
