Possibly the Xcode command line tools need updating. Take a look at
https://stackoverflow.com/questions/42538171/how-to-update-xcode-command-line-tools
> On Sep 20, 2023, at 10:43 PM, Ju Liu wrote:
>
> Hi PETSc team:
>
> I recently got my Xcode command line tools upgraded to version 15. When
On Wed, 20 Sept 2023 at 12:17, Ce Qin wrote:
>
> Dear all,
>
> I am currently implementing a multigrid solver for Maxwell's equations in 3D.
> The AFW smoother has excellent convergence properties for Maxwell's
> equations. I noticed that PCPATCH provides such types of smoothers. However, I am
Thank you for your help. I will try this solution.
Sreeram
On Wed, Sep 20, 2023 at 9:24 AM Barry Smith wrote:
>
> Use VecCreate(), VecSetSizes(), VecSetType() and MatCreate(),
> MatSetSizes(), and MatSetType() instead of the convenience functions
> VecCreateMPICUDA() and MatCreateShell().
>
>
>
Just FYI, with the configuration below, PETSc 3.19.5 works fine with Visual
Studio 2022, statically linked to MS MPI and Intel MKL 2023.
Thuc
-----Original Message-----
From: petsc-users [mailto:petsc-users-boun...@mcs.anl.gov] On Behalf Of Thuc Bui
Sent: Tuesday, September 19, 2023 11:24 PM
To:
Since you create the matrices yourself, I think you should use
-A_mat_type aijcusparse \
-P_mat_type aijcusparse \
instead of
-A_dm_mat_type aijcusparse \
-P_dm_mat_type aijcusparse \
Perhaps you can remove "-dm_vec_type cuda", as the dm mat type already
determines the dm vector type.
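For reference, a minimal sketch (mine, not from this thread) of how an "A_"
prefix gets attached so that -A_mat_type is honored; N is an assumed global
size:

  Mat A;
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetOptionsPrefix(A, "A_"));  /* makes -A_mat_type apply to A */
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N));
  PetscCall(MatSetFromOptions(A));          /* reads -A_mat_type aijcusparse */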
--Junchao Zhang
You are missing a call to DMSetFromOptions().
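As a sketch (assuming a DMDA; adapt to your DM type), the call belongs after
the DM is created and before anything is built from it:

  DM dm;
  PetscCall(DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                         DM_BOUNDARY_NONE, DMDA_STENCIL_STAR, 32, 32, 32,
                         PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE, 1, 1,
                         NULL, NULL, NULL, &dm));
  PetscCall(DMSetFromOptions(dm));  /* picks up -dm_mat_type / -dm_vec_type */
  PetscCall(DMSetUp(dm));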
On Wed, Sep 20, 2023, 19:20 Ramoni Z. Sedano Azevedo <
ramoni.zsed...@gmail.com> wrote:
> Thanks for the tip. Using dm_mat_type and dm_vec_type the code runs.
> ./${executable} \
> -A_dm_mat_type aijcusparse \
> -P_dm_mat_type aijcusparse \
> -dm_vec_
Thanks for the tip. Using dm_mat_type and dm_vec_type the code runs.
./${executable} \
-A_dm_mat_type aijcusparse \
-P_dm_mat_type aijcusparse \
-dm_vec_type cuda \
-use_gpu_aware_mpi 0 \
-em_ksp_monitor_true_residual \
-em_ksp_type bcgs \
-em_pc_type bjacobi \
-em_sub_pc_type ilu \
-em_su
Hi Anna,
I think you are looking at an output from a processor that does not have
the face set loaded.
If I load your mesh with a single MPI rank, I see:
DM Object: DM_0x8400_0 1 MPI processes
type: plex
DM_0x8400_0 in 3 dimensions:
Number of 0-cells per rank: 27499
Number of 1-cells pe
Try to also add -dm_mat_type aijcusparse -dm_vec_type cuda
--Junchao Zhang
On Wed, Sep 20, 2023 at 10:21 AM Ramoni Z. Sedano Azevedo <
ramoni.zsed...@gmail.com> wrote:
>
> Hey!
>
> I am using PETSc in a Fortran code and we use MPI parallelization. We
> would like to use GPU parallelization,
Since you are creating some vectors from the DM, you also need the options
database option -dm_vec_type cuda
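In code terms, a sketch (dm being your DM, after DMSetFromOptions() has run):

  Vec x;
  /* Equivalent in source, without the command-line option: */
  PetscCall(DMSetVecType(dm, VECCUDA));
  /* Either way, vectors obtained from the DM come out with that type,
     so with -dm_vec_type cuda this returns a VECCUDA vector. */
  PetscCall(DMCreateGlobalVector(dm, &x));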
Barry
> On Sep 20, 2023, at 11:21 AM, Ramoni Z. Sedano Azevedo
> wrote:
>
>
> Hey!
>
> I am using PETSc in a Fortran code and we use MPI parallelization. We would
> like to use
Hey!
I am using PETSc in a Fortran code and we use MPI parallelization. We would
like to use GPU parallelization, but we are encountering an error.
PETSc is configured as follows:
#!/bin/bash
./configure \
--prefix=${PWD}/installdir \
--with-fortran \
--with-fortran-kernels=true \
--with-cuda
1 KSP preconditioned resid norm 1.027326121286e-13 true resid norm
4.122100161195e-13 ||r(i)||/||b|| 1.256880059386e-03
The residual norms are very small. From the final column we see that the
relative decrease in the residual norm was only 1.e-3, meaning the initial
linear system residual
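For scale, the two norms above imply ||b|| = 4.12e-13 / 1.26e-3 ≈ 3.3e-10, so
the absolute residual looks tiny mainly because ||b|| itself is tiny; the
solve only reduced the residual by about three orders of magnitude.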
Will do. The output is very long, so I'm sending you the last 20 iterations of
KSP when the nonlinear solver did not converge. That's what I have:
Linear solve converged due to CONVERGED_ITS iterations 1
9980 KSP preconditioned resid norm 7.911586848688e-14 true resid norm
3.571299151668e-13 ||r(i
Run with -snes_monitor and -ksp_monitor_true_residual and send the output.
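A sketch of where those options are read (in current-PETSc style; on 3.4.5 the
same calls exist but with ierr/CHKERRQ error handling); snes and x are your
objects:

  SNES snes;
  PetscCall(SNESCreate(PETSC_COMM_WORLD, &snes));
  /* ... SNESSetFunction(), SNESSetJacobian() ... */
  PetscCall(SNESSetFromOptions(snes)); /* enables -snes_monitor and passes
                                          -ksp_monitor_true_residual on to
                                          the inner KSP */
  PetscCall(SNESSolve(snes, NULL, x));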
> On Sep 20, 2023, at 9:34 AM, Harry-Arthur Cousin
> wrote:
>
> Hello,
>
> I'm on petsc version 3.4.5.
>
> My program has a time loop where I have to solve a non-linear system of the
> form F(X) = D*L*J + I + kappa
Use VecCreate(), VecSetSizes(), VecSetType() and MatCreate(), MatSetSizes(),
and MatSetType() instead of the convenience functions VecCreateMPICUDA() and
MatCreateShell().
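A minimal sketch of that generic pattern (illustrative, with an assumed global
size N):

  Vec x;
  Mat A;
  PetscCall(VecCreate(PETSC_COMM_WORLD, &x));
  PetscCall(VecSetSizes(x, PETSC_DECIDE, N));
  PetscCall(VecSetType(x, VECCUDA));   /* what VecCreateMPICUDA() set up */

  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N));
  PetscCall(MatSetType(A, MATSHELL));  /* context and operations then go via
                                          MatShellSetContext() and
                                          MatShellSetOperation() */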
> On Sep 19, 2023, at 8:44 PM, Sreeram R Venkat wrote:
>
> Thank you for your reply.
>
> Let's call this matrix M:
> (
Hello,
I'm on petsc version 3.4.5.
My program has a time loop where I have to solve a non-linear system of the
form F(X) = D*L*J + I + kappa(X) where X is an unknown vector, D and L are
diagonal square matrices, J is a vector that can be calculated by solving a
linear system with X and kappa i
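The excerpt is cut off, but such a system is typically cast for SNES as
solving F(X) = 0. Below is a rough sketch under that assumption (in
current-PETSc style), with every name (AppCtx, inner, Ivec, ...) hypothetical
and the kappa term omitted since its definition is not shown:

  typedef struct {
    Mat D, L;    /* the diagonal matrices */
    Vec Ivec, J; /* the vectors I and J */
    Vec work;
    KSP inner;   /* linear solve that produces J from X (assumed) */
  } AppCtx;

  static PetscErrorCode FormFunction(SNES snes, Vec X, Vec F, void *ptr)
  {
    AppCtx *ctx = (AppCtx *)ptr;

    PetscFunctionBeginUser;
    PetscCall(KSPSolve(ctx->inner, X, ctx->J));    /* J from X */
    PetscCall(MatMult(ctx->L, ctx->J, ctx->work)); /* work = L*J */
    PetscCall(MatMult(ctx->D, ctx->work, F));      /* F = D*L*J */
    PetscCall(VecAXPY(F, 1.0, ctx->Ivec));         /* F += I */
    /* += kappa(X) would go here */
    PetscFunctionReturn(0);
  }

The callback would be registered with SNESSetFunction(snes, r, FormFunction,
&ctx) and the solve driven from the time loop.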
Hello,
I am trying to load a mesh, created via the gmsh api, into a DMPlex. The code
is seen below:
gmsh::initialize();
elsize0 = 1.0;
double a = 20.0;
double b = 20.0;
double c = 10.0;
double d = 30.0;
double e = 70.0;

gmsh::
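As an alternative (a sketch, not from the message): if the API-built model is
written out first with gmsh::write("mesh.msh"), PETSc can load it in one call;
the filename here is assumed:

  DM dm;
  PetscCall(DMPlexCreateGmshFromFile(PETSC_COMM_WORLD, "mesh.msh",
                                     PETSC_TRUE /* interpolate */, &dm));
  PetscCall(DMSetFromOptions(dm));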
Dear all,
I am currently implementing a multigrid solver for Maxwell's equations in 3D.
The AFW smoother has excellent convergence properties for Maxwell's equations.
I noticed that PCPATCH provides such types of smoothers. However, I am using
the deal.II library to build the patches. PCPATCH seem