PetscInt :: f(*)
PetscErrorCode z
end subroutine
end interface
The compiler message is probably due to the type of an argument not matching
the expected one. In particular, if you are passing NULL in one of the array
arguments, you should use PETSC_NULL_INTEGER_ARRAY and not PETSC_NULL_INTEGER.
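For example (a rough sketch only, not taken from the code under discussion; the
global size n and the preallocation count nz are made up), a MatCreateAIJ call
that previously passed PETSC_NULL_INTEGER for the d_nnz/o_nnz arrays would now
read:
      Mat            :: A
      PetscInt       :: n, nz
      PetscErrorCode :: ierr
      ...
      nz = 9
      ! uniform preallocation, so the per-row count arrays are not supplied
      ! and the NULL *array* constant must be used for those arguments
      call MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE,    &
                        n, n, nz, PETSC_NULL_INTEGER_ARRAY,              &
                        nz, PETSC_NULL_INTEGER_ARRAY, A, ierr)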
Jose
ISDestroy(&field2);
ierr=PetscFree(idx1);
ierr=PetscFree(idx2);
We would like to solve an FEA problem (unstructured grid) where the nodes
on the elements have different dofs. For example the corner nodes have
only dof 0 and then mid-side nodes have dofs 0,1,2 (think 8 node
serendipity element). This is a multi-physics problem so we are looking to
use the fie
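Assuming this is heading toward PCFIELDSPLIT, a rough sketch (split names are
made up; idx0(1:n0) and idx1(1:n1) are assumed arrays holding the global
equation numbers of each field, and ksp is the already-created solver) might
look like:
      IS             :: field0, field1
      PC             :: pc
      PetscErrorCode :: ierr
      call ISCreateGeneral(PETSC_COMM_WORLD, n0, idx0, PETSC_COPY_VALUES, field0, ierr)
      call ISCreateGeneral(PETSC_COMM_WORLD, n1, idx1, PETSC_COPY_VALUES, field1, ierr)
      call KSPGetPC(ksp, pc, ierr)
      call PCSetType(pc, PCFIELDSPLIT, ierr)
      call PCFieldSplitSetIS(pc, 'corner', field0, ierr)
      call PCFieldSplitSetIS(pc, 'midside', field1, ierr)
      call ISDestroy(field0, ierr)
      call ISDestroy(field1, ierr)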
Problem solved. In my haste to clean up the code, I messed up the
definition of my rhs values: b(:) instead of b(*).
The conversion to the main branch seems to be complete and all my test
cases seem to work correctly now.
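For anyone who hits the same thing, the distinction in question is just this
(a minimal sketch; the routine name is made up): an assumed-size dummy b(*)
receives only a base address, while an assumed-shape b(:) needs an explicit
interface and passes a descriptor instead.
      subroutine form_rhs(n, b)
#include <petsc/finclude/petscsys.h>
      use petscsys
      implicit none
      PetscInt    :: n, i
      PetscScalar :: b(*)   ! assumed-size, as the calling code expects;
                            ! not b(:) (assumed-shape)
      do i = 1, n
         b(i) = 0.0
      end do
      end subroutine form_rhs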
-
On 3/24/25 3:26 PM, Sanjay Govindjee wrote:
My odyssey to update my code to the main branch continues...I am now
encountering an error with VecSetValue:
[0]PETSC ERROR: VecSetValues() at
/Users/sg/petsc-3.22.4main/src/vec/vec/interface/rvector.c:926 Null
Pointer: Parameter # 4
This is on a single processor run -- I got tired of c
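In case it helps someone searching later: parameter #4 of VecSetValues is the
array of values, so both the index and value arguments need real storage
behind them. A minimal sketch (assuming a Vec b has already been created and
sized, and ierr is a PetscErrorCode):
      PetscInt    :: idx(2), nv
      PetscScalar :: vals(2)
      nv      = 2
      idx(1)  = 0          ! global indices are 0-based
      idx(2)  = 1
      vals(1) = 1.0
      vals(2) = 2.0
      call VecSetValues(b, nv, idx, vals, INSERT_VALUES, ierr)
      call VecAssemblyBegin(b, ierr)
      call VecAssemblyEnd(b, ierr)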
should not initialize it with any
value before the call.
Barry
I take one item back...I was a failure at using the debugger. Here is
the backtrace. MatCreate seems to have valid data :/
* thread #1, queue = 'com.apple.main-thread', stop reason = signal
SIGSTOP
* frame #0: 0x7fff69d92746
libsystem_kernel.dylib`__semwait_signal + 10
On Mar 23, 2025, at 11:10 PM, Sanjay Govindjee via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Barry,
I now have a compiled version of my code using the main branch. When
I run however I am getting an error in matcreate_( ) when I try to solve
(actually just set up the matrix). The console window reports
[0]PETSC ERROR: matcreate_() at
/Users/sg/petsc-3.22.4main/gnu/ftn/mat/ut
Working off the main branch, I am trying to compile a code that uses
VecGetArrayReadF90 and VecRestoreArrayReadF90.
The subroutines all compile but at link, I am encountering
Undefined symbols for architecture x86_64:
"_vecgetarrayreadf90_", referenced from:
_parbmat_ in parbm
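If I understand the main-branch change correctly, the F90-suffixed names were
folded into the plain ones, so the calls become VecGetArrayRead and
VecRestoreArrayRead with a Fortran pointer argument. A minimal sketch (x being
the Vec in question):
      PetscScalar, pointer :: xv(:)
      PetscErrorCode       :: ierr
      call VecGetArrayRead(x, xv, ierr)
      ! ... use xv(1:n) read-only ...
      call VecRestoreArrayRead(x, xv, ierr)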
-
On 3/21/25 7:17 AM, Barry Smith wrote:
I have just pushed a major update to the Fortran interface to the main PETSc
git branch. Could you please try to work with main (to become release in a
couple of weeks) with your Fortran code as we debug the problem? This will save
you a lot of work and hopefully make the debugging more straightforward.
You can send the same output with the debugger if it crashes in the main
branch and I can try to track down what is going wrong.
Barry
On 23 Mar 2025, at 8:25, Sanjay Govindjee via petsc-users wrote:
Small update. I managed to eliminate all the errors associated with
PetscViewer and below (it had to do with the fact that I had not yet built a
module
---
On Fri, Mar 21, 2025 at 12:39 PM Satish Balay
wrote:
> On Fri, 21 Mar 2025, Sanjay Govindjee via petsc-users wrote:
>
> > Thanks. I will give that a try. To be clear, I should do
> > > > git checkout origin/main
> > > > make all
> > > >
On Mar 21, 2025, at 12:37 AM, Sanjay Govindjee via petsc-users
wrote:
I am trying to upgrade my code to PETSc 3.22.4 (the code was last
updated to 3.19.4 or perhaps 3.18.1, I've lost track). I've been using
this code with PETSc for over 20 years.
To get my code to compile and link during this update, I only need to
make two changes; one was to use PetscViewerPus
We do exactly this by using the same prefix for each file and bumping the
number with each load step; then paraview does the stacking
automagically for us. However we write out VTU files for our FEA
computations.
Perhaps you could examine some of the other formats that paraview can
read and see
In the release notes, it mentions the introduction of
PETSC_NULL_INTEGER_ARRAY.
Am I correct in interpreting this to mean that I can/should change my usage
of
PETSC_NULL_INTEGER(1) in function calls to PETSc routines to
PETSC_NULL_INTEGER_ARRAY?
-sanjay
Barry,
As a regular user of PETSc in Fortran, I see no problem with these
changes to the Fortran interface.
-sanjay
On 6/5/24 10:14 AM, Barry Smith wrote:
I am working to improve PETSc support for Fortran and to automate mo
I was wondering if anyone has experience building PETSc + Fortran on
an M2-based Mac? In particular, I am looking for compiler recommendations.
-sanjay
did you mean to write
type (userctx) ctx
in this example?
subroutine func(snes, x, f, ctx, ierr)
SNES snes
Vec x,f
type (userctx) user
PetscErrorCode ierr
...
external func
SNESSetFunction(snes, r, func, ctx, ierr)
SNES snes
Vec r
PetscErrorCode ierr
type (userctx) user
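Presumably the intended form, with the declaration matching the dummy argument
name, would read:
subroutine func(snes, x, f, ctx, ierr)
SNES snes
Vec x, f
type (userctx) ctx
PetscErrorCode ierr
...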
On Tue, Dec 1
I agree with Bruce that having a link to
https://petsc.org/release/manual/fortran/ at the top of the C/Fortran
API page (https://petsc.org/release/manualpages/) would be helpful.
The C descriptions themselves are 98% of the way there for Fortran users
(like myself). The only time that more i
ut using MPI_IRecv as it will require a bit of rewriting
since right now I process the received
data sequentially after each blocking MPI_Recv -- clearly slower but easier to
code.
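For reference, a rough sketch of the non-blocking variant (the names rbuf,
sbuf, rcount, scount, nbrs and nnbr are made up, standing for per-neighbor
receive/send buffers, counts and ranks): post every receive up front, then
complete them with MPI_Waitall instead of blocking in MPI_Recv one neighbor
at a time.
      integer              :: k, nreq, ierr
      integer, allocatable :: req(:)
      allocate(req(nnbr))
      nreq = 0
      do k = 1, nnbr                      ! post every receive first
         nreq = nreq + 1
         call MPI_Irecv(rbuf(1,k), rcount(k), MPI_DOUBLE_PRECISION,      &
                        nbrs(k), 0, PETSC_COMM_WORLD, req(nreq), ierr)
      end do
      do k = 1, nnbr                      ! sends may stay blocking
         call MPI_Send(sbuf(1,k), scount(k), MPI_DOUBLE_PRECISION,       &
                       nbrs(k), 0, PETSC_COMM_WORLD, ierr)
      end do
      call MPI_Waitall(nreq, req, MPI_STATUSES_IGNORE, ierr)
      ! only unpack rbuf(:,k) after the Waitall has completed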
Thanks again for the help.
-sanjay
On 5/30/19 4:48 AM, Lawrence Mitchell wrote:
Hi Sanjay,
On 30 May 2019, at 08:58, Sanjay Govindjee via petsc-users wrote:
13:RSS Delta=1.24932e+08, Malloc Delta= 0
So we did see RSS increase in 4k-page sizes after KSPSolve. As long
before the receiving processes had completed (which resulted
in data loss as the buffers are freed after the call to the
routine). MPI_Barrier was the solution proposed
to us. I don't think I can dispense with it, but will think
about it some more.
M, Stefano Zampini wrote:
On May 31, 2019, at 9:50 PM, Sanjay Govindjee via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Matt,
Here is the process as it currently stands:
1) I have a PETSc Vec (sol), which comes from a KSPSolve
2) Each processor grabs its section of sol via VecG
I want its values
sent in a complex but computable way to local vectors on each process.
-sanjay
On 5/31/19 3:37 AM, Matthew Knepley wrote:
On Thu, May 30, 2019 at 11:55 PM Sanjay Govindjee via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Hi Junchao,
Thanks for the hints
8:41 PM, Zhang, Junchao wrote:
Hi, Sanjay,
Could you send your modified data exchange code (psetb.F) with
MPI_Waitall? See other inlined comments below. Thanks.
On Thu, May 30, 2019 at 1:49 PM Sanjay Govindjee via petsc-users
<petsc-users@mcs.anl.gov> wrote:
Lawrence,
stood you can go back to the question of
the memory management of the other solvers
Barry
On May 29, 2019, at 11:51 PM, Sanjay Govindjee via petsc-users
wrote:
I am trying to track down a memory issue with my code; apologies in
advance for the longish message.
I am solving a FEA problem with a number of load steps involving about 3000
right hand side and tangent assemblies and solves. The program is
mainly Fortran, with a C memory allocator.
When I
(In Fortran) do the calls
call PetscMallocGetCurrentUsage(val, ierr)
call PetscMemoryGetCurrentUsage(val, ierr)
return the per process memory numbers? or are the returned values summed
across all processes?
-sanjay
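My understanding is that both calls report per-process numbers -- PETSc's own
malloc'd memory and the resident set size, respectively -- so a global total
has to be reduced by hand. A small sketch, assuming that is right:
      PetscLogDouble :: mloc, mtot
      PetscErrorCode :: ierr
      call PetscMemoryGetCurrentUsage(mloc, ierr)           ! this rank only
      call MPI_Allreduce(mloc, mtot, 1, MPI_DOUBLE_PRECISION, MPI_SUM,   &
                         PETSC_COMM_WORLD, ierr)            ! sum over ranks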
I'm seeing half precision on at least 10 to 20% of the entries :(
Knowing I should see full precision, I will dig deeper.
-sanjay
On 5/14/19 5:22 PM, Mark Adams wrote:
I would hope you get full precision. How many digits are you seeing?
On Tue, May 14, 2019 at 7:15 PM Sanjay Govindjee via petsc-users wrote:
I am using the following bit of code to debug a matrix. What is the
expected precision of the numbers that I will find in my ASCII file?
As far as I can tell it is not the full double precision that I was
expecting.
call PetscViewerASCIIOpen(PETSC_COMM_WORLD, tangview, K_view, ierr)
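The default ASCII format is not written with full double precision; as far as
I recall, pushing the MATLAB format prints roughly 16 significant digits. A
sketch (with K standing in for the Mat being viewed, a made-up name):
      call PetscViewerASCIIOpen(PETSC_COMM_WORLD, tangview, K_view, ierr)
      call PetscViewerPushFormat(K_view, PETSC_VIEWER_ASCII_MATLAB, ierr)
      call MatView(K, K_view, ierr)
      call PetscViewerPopFormat(K_view, ierr)
      call PetscViewerDestroy(K_view, ierr)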
to PetscMemoryView()
and -memory_view
--Junchao Zhang
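For reference, a minimal call sketch of the replacement (assuming the Fortran
binding for PetscMemoryView is generated in your build, as it should be with
the new interface):
      call PetscMemoryView(PETSC_VIEWER_STDOUT_WORLD, 'Memory usage:', ierr)
      ! or simply run with -memory_view to get a summary at PetscFinalize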
On Tue, May 7, 2019 at 6:56 PM Sanjay Govindjee via petsc-users
<petsc-users@mcs.anl.gov> wrote:
I was trying to clean up some old scripts we have for running our codes
which include the command line option -memory_info.
I went digging in the manuals to try and figure out what this used to do
and what has replaced its functionality but I wasn't able
to figure it out. Does anyone recall the
Throws the proper error on my machine:
$ ~/petsc-3.10.1/intel/bin/mpirun -np 2 ex6
[0]PETSC ERROR: --------------------- Error Message --------------------------------
[0]PETSC ERROR: This is a uniprocessor example only!
On 11/17/18 1:51 PM, Fazlul Huq via petsc-u