"How did you compile your code, using nvcc, mpicc or mpicxx? I expect
PetscScalar to be std::complex with C++ or _Complex with C. So I don't
understand why "PetscScalar still refers to thrust::complex"."I use mpicxx and nvcc.I also attached my cmakelists.txtBest,
Langtian L
Hi Junchao,

I pulled and reinstalled. Although PetscScalar still refers to
thrust::complex, based on your suggestion, casting the pointer to
PetscScalar * works. It has the same storage order as std::complex.

    Mat kernel;
    auto *h_kernel = reinterpret_cast<PetscScalar *>(h_kernel_temp);
    …
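A minimal sketch of this cast, assuming h_kernel_temp is a host buffer of
std::complex<double> and a PETSc build with complex scalars
(--with-scalar-type=complex); the static_assert documents why the
reinterpret_cast is layout-safe:

    // std::complex<double>, thrust::complex<double>, and cuDoubleComplex
    // all store {real, imag} as two contiguous doubles, matching
    // PetscScalar in a complex build.
    #include <complex>
    #include <petscmat.h>

    static_assert(sizeof(std::complex<double>) == sizeof(PetscScalar),
                  "PetscScalar should have the std::complex<double> layout");

    void wrap_host_buffer(std::complex<double> *h_kernel_temp, PetscScalar **out)
    {
      // Same bytes, different type: no copy happens here.
      *out = reinterpret_cast<PetscScalar *>(h_kernel_temp);
    }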
The MR is merged to petsc/release.
BTW, in MatCreateDense the data pointer has to be a host pointer. It is
better to always use PetscScalar* (instead of std::complex*) to do the
cast (a sketch follows after this message).
--Junchao Zhang
On Thu, Oct 3, 2024 at 3:03 AM 刘浪天 wrote:
> Okay. I see :D
> Langtian
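A minimal sketch of this advice, with illustrative sizes; h_data is assumed
to be a host buffer already cast to PetscScalar* as above:

    #include <petscmat.h>

    PetscErrorCode create_dense_from_host(MPI_Comm comm, PetscInt dim,
                                          PetscScalar *h_data, Mat *A)
    {
      PetscFunctionBeginUser;
      // MatCreateDense() takes a *host* pointer; h_data must hold the
      // local block in column-major order.
      PetscCall(MatCreateDense(comm, PETSC_DECIDE, PETSC_DECIDE,
                               dim, dim, h_data, A));
      PetscCall(MatAssemblyBegin(*A, MAT_FINAL_ASSEMBLY));
      PetscCall(MatAssemblyEnd(*A, MAT_FINAL_ASSEMBLY));
      PetscFunctionReturn(PETSC_SUCCESS);
    }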
Okay. I see :D

Langtian Liu
Institute for Theoretical Physics, Justus-Liebig-University Giessen
Heinrich-Buff-Ring 16, 35392 Giessen, Germany
email: langtian@icloud.com  Tel: (+49) 641 99 33342

On Oct 3, 2024, at 9:58 AM, Jose E. Roman wrote:
You have to wait until the merge request has the label "Merged" instead of
"Open".
> On 3 Oct 2024, at 9:55, 刘浪天 via petsc-users wrote:
>
> Hello Junchao,
>
> Okay. Thank you for helping find this bug. I pulled the newest version of
> petsc today. It seems this error has not been fixed in the present release
> version. Maybe I should wait for some days.
>
> Best wishes,
> Langtian
On Oct 3, 2024, at 12:12 AM, Junchao Zhang wrote:
Hi, Langtian,
Thanks for the configure.log and I now see what's wrong. Since you
compiled your code with nvcc, we mistakenly thought petsc was configured
with cuda.
It is fixed in https://gitlab.com/petsc/petsc/-/merge_requests/7909
On Wed, Oct 2, 2024 at 3:57 AM 刘浪天 via petsc-users wrote:

> Hi all,
>
> I am using PETSc and SLEPc to solve the Faddeev equation of baryons. I
> encounter a problem with the function MatCreateDense when changing from
> CPU to CPU-GPU computations.
> At first, I wrote the codes in purely CPU computation …
On Wed, Oct 2, 2024 at 6:11 AM 刘浪天 via petsc-users wrote:

> I cannot declare everything as PetscScalar; my strategy is to compute the
> elements of the matrix on the GPU block by block and copy them back to
> the CPU, finally computing the eigenvalues using SLEPc on the CPU.

Then you have to either a…
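A minimal sketch of the block-by-block strategy quoted above; the block
layout and host_offset arithmetic are hypothetical placeholders:

    #include <cuComplex.h>
    #include <cuda_runtime.h>

    // Copy one finished block from device memory into the big host array
    // that will later be handed to PETSc (the GPU kernel is not shown).
    void copy_block_to_host(const cuDoubleComplex *d_block,  // device memory
                            cuDoubleComplex *h_matrix,       // host memory
                            size_t block_elems, size_t host_offset)
    {
      cudaMemcpy(h_matrix + host_offset, d_block,
                 block_elems * sizeof(cuDoubleComplex),
                 cudaMemcpyDeviceToHost);
    }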
Does it work if you declare everything as PetscScalar instead of
cuDoubleComplex?
> On 2 Oct 2024, at 11:23, 刘浪天 wrote:
>
> Hi Jose,
>
> Since my matrix is too large, I cannot create the Mat on the GPU. So I
> still want to create and compute the eigenvalues of this matrix on the
> CPU using SLEPc.
>
> Best,
> Langtian
For the CUDA case you should use MatCreateDenseCUDA() instead of
MatCreateDense(). With this you pass a pointer to the data in GPU memory.
But I guess "new cuDoubleComplex[dim*dim]" allocates on the CPU; you should
use cudaMalloc() instead.
Jose
> On 2 Oct 2024, at 10:56, 刘浪天 via …
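A minimal sketch combining Jose's two points (cudaMalloc for device memory,
MatCreateDenseCUDA for device pointers), assuming a recent PETSc that
provides PetscCallCUDA() in petscdevice_cuda.h; dim is illustrative and
cleanup is omitted:

    #include <petscmat.h>
    #include <petscdevice_cuda.h>

    PetscErrorCode create_dense_on_gpu(MPI_Comm comm, PetscInt dim, Mat *A)
    {
      PetscScalar *d_data = NULL;
      PetscFunctionBeginUser;
      // Allocate on the GPU; "new cuDoubleComplex[dim*dim]" would
      // allocate on the host instead.
      PetscCallCUDA(cudaMalloc((void **)&d_data,
                               (size_t)dim * dim * sizeof(PetscScalar)));
      // A device pointer goes to MatCreateDenseCUDA(), not MatCreateDense().
      PetscCall(MatCreateDenseCUDA(comm, PETSC_DECIDE, PETSC_DECIDE,
                                   dim, dim, d_data, A));
      PetscFunctionReturn(PETSC_SUCCESS);
    }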