Thank you all for the answers.
I've just started in a group where the code has been running on the CPU for
some time, and we are now trying to run it on the GPU to get a performance
gain.
I'll go through the points you've raised here.
Thank you very much!
On Thu, Aug 31, 2023, … wrote:
Yikes, sorry I missed that the first run was CPU and the second GPU.
The run on the CPU is indicative of a very bad preconditioner; it doesn't
really converge. When the true residual norm jumps by a factor of 10^3 at the
first iteration, it means the ILU preconditioner is just not appropriate for
this problem.
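A quick way to see that behavior is to print both the preconditioned and true residual norms at every iteration; a minimal sketch of the options (the executable name ./app is a placeholder):

  ./app -ksp_monitor_true_residual -ksp_converged_reason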
On Wed, Aug 30, 2023 at 8:46 PM Barry Smith wrote:
>
> What convergence do you get without the GPU matrix and vector
> operations?
Barry, that was in the original email
>
>
> Can you try the GPU run with -ksp_type gmres -ksp_pc_side right?
>
> For certain problems, ILU can produce catastrophically bad preconditioners.
What convergence do you get without the GPU matrix and vector operations?
Can you try the GPU run with -ksp_type gmres -ksp_pc_side right?
For certain problems, ILU can produce catastrophically bad preconditioners.
Barry
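With right preconditioning, GMRES minimizes the true (unpreconditioned) residual norm, so the reported and true residuals agree. A run combining that suggestion with the monitoring options might look like this sketch (./app and the process count are placeholders):

  mpiexec -n 4 ./app -ksp_type gmres -ksp_pc_side right -ksp_monitor_true_residual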
> On Aug 30, 2023, at 4:41 PM, Ramoni Z. Sedano Azevedo wrote:
Hi, Ramoni
Do you have a reproducible example? Usually it is because the CPU and
GPU are out of synchronization; it could be a problem on the user's side or in PETSc.
Thanks.
--Junchao Zhang
On Wed, Aug 30, 2023 at 4:13 PM Ramoni Z. Sedano Azevedo <
ramoni.zsed...@gmail.com> wrote:
> Hello,
>
> I'm running a Fortran code that uses PETSc with MPI on the CPU, and I
> would like to run it on the GPU.
Hello,
I'm running a Fortran code that uses PETSc with MPI on the CPU, and I would
like to run it on the GPU.
PETSc is configured as follows:
./configure \
--prefix=${PWD}/installdir \
--with-fortran \
--with-fortran-kernels=true \
--with-cuda \
--download-fblaslapack \
--with-scalar-type=complex
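For context, once PETSc is built with CUDA, the matrix and vector operations are usually moved to the GPU with runtime type options rather than code changes; a minimal sketch (the executable name ./app and the process count are placeholders):

  mpiexec -n 4 ./app -mat_type aijcusparse -vec_type cuda -log_view

-log_view then reports where time is spent, which helps confirm whether the GPU is actually being used.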