ARSE support in PETSc is really in bad
> shape and mostly untested, with coding solutions that are probably outdated
> now.
> I'll see what I can do to fix the class if I have time in the next weeks.
>
> Stefano
>
> Il giorno mer 15 apr 2020 alle ore 17:21 Mark Adams ha
>
We have a problem when going from 32K to 64K cores on Cori-haswell.
Does anyone have any thoughts?
Thanks,
Mark
-- Forwarded message -
From: David Trebotich
Date: Wed, Apr 15, 2020 at 4:20 PM
Subject: Re: petsc on Cori Haswell
To: Mark Adams
Hey Mark-
I am running into some
Whoops, this is actually Cori-KNL.
On Wed, Apr 15, 2020 at 4:33 PM Mark Adams wrote:
> We have a problem when going from 32K to 64K cores on Cori-haswell.
> Does anyone have any thoughts?
> Thanks,
> Mark
>
> -- Forwarded message -
> From: David Trebotich
How does one use SuperLU with GPUs? I don't seem to get any GPU performance
data, so I assume the GPUs are not getting turned on. Am I wrong about that?
I configure with:
configure options: --with-fc=0 --COPTFLAGS="-g -O2 -fPIC -fopenmp"
--CXXOPTFLAGS="-g -O2 -fPIC -fopenmp" --FOPTFLAGS="-g -O2 -fPIC -
Also, can anyone recommend a highly scalable test problem that Treb can run
with 64K cores?
Thanks,
On Wed, Apr 15, 2020 at 4:41 PM Mark Adams wrote:
> Whoops, this is actually Cori-KNL.
>
> On Wed, Apr 15, 2020 at 4:33 PM Mark Adams wrote:
>
>> We have a problem when goi
> cudaGetDeviceCount(&devCount);
> printf( "CUDA Devices: \n \n");
>
> <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
>
> Satish
>
> On Wed, 1
ices:
> >
> > 0 : Quadro T2000 7 5
> > Global memory: 3911 mb
> > Shared memory: 48 kb
> > Constant memory: 64 kb
> > Block registers: 65536
> >
> /home/balay/petsc/src/snes/tutorials
> Possible problem with ex19 running with superlu_dist, d
On Wed, Apr 15, 2020 at 3:18 PM Stefano Zampini
wrote:
>
>
> On Apr 15, 2020, at 10:14 PM, Mark Adams wrote:
>
> Thanks, it looks correct. I am getting memory leaks (appended)
>
> And something horrible is going on with performance:
>
> MatLUFactorNum 130 1.0
You can write your own DMPlexTSComputeRHSFunctionFVM method like this and
give that to the dm.
PetscErrorCode foo(DM dm, PetscReal time, Vec locX, Vec F, void *user)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  /* call the stock finite-volume RHS evaluation ... */
  ierr = DMPlexTSComputeRHSFunctionFVM(dm, time, locX, F, user);CHKERRQ(ierr);
  /* ... then add whatever extra terms or post-processing you need here */
  PetscFunctionReturn(0);
}
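A minimal sketch of how you might hand that wrapper to the DM; this is an
assumption about the surrounding setup (a TS "ts" is assumed to exist), not a
verbatim continuation of the message:

  /* register the wrapper as the local RHS function used by the TS */
  ierr = TSSetDM(ts, dm);CHKERRQ(ierr);
  ierr = DMTSSetRHSFunctionLocal(dm, foo, user);CHKERRQ(ierr);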
/-/blob/master/src/snes/utils/dmplexsnes.c#L1528
>
> What would you want the interface to look like?
>
> Thanks,
>
> Matt
>
> On Fri, Apr 17, 2020 at 7:55 AM Mark Adams wrote:
>
>> You can write your own DMPlexTSComputeRHSFunctionFVM method like this and
>
Line 1528 of dmplexsnes.c ?
On Fri, Apr 17, 2020 at 10:41 AM Matthew Knepley wrote:
> Tried it again and works for me.
>
>Matt
>
> On Fri, Apr 17, 2020 at 10:39 AM Mark Adams wrote:
>
>> Matt, that link seems messed up.
>>
>> On Fri, Apr 17, 2020
Just to be clear, you seem to be suggesting adding an interface for this
and/or code to clone?
On Fri, Apr 17, 2020 at 11:08 AM Matthew Knepley wrote:
> On Fri, Apr 17, 2020 at 10:58 AM Mark Adams wrote:
>
>> Line 1528 of dmplexsnes.c ?
>>
>
> Yes. This is where we add
others have the data in some form and it's more
natural to stuff the data in yourself.
Just my 2c,
Mark
On Fri, Apr 17, 2020 at 1:04 PM Matthew Knepley wrote:
> On Fri, Apr 17, 2020 at 12:52 PM Mark Adams wrote:
>
>> Just to be clear, you seem to be suggesting adding an interface for th
'--with-fc=mpif90',
'--with-shared-libraries=1',
# '--known-mpi-shared-libraries=1',
'--with-x=0',
'--with-64-bit-indices=0',
'--with-debugging=0',
'PETSC_ARCH=arch-summit-opt-gnu-cuda-omp',
This is to free the L and U data structures at the end of the program.
>
> Sherry
>
> On Sat, Apr 18, 2020 at 7:24 AM Mark Adams wrote:
>
>> Back to SuperLU + GPUs (adding Sherry)
>>
>> I get this error (appended) running 'check', as I said before. I
ere is a crash.
>
> Sherry
>
> On Sat, Apr 18, 2020 at 11:44 AM Mark Adams wrote:
>
>> Sherry, I did rebase with master this week:
>>
>> SuperLU:
>> Version: 5.2.1
>> Includes: -I/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/include
>> L
>
> Sherry
>
> On Sat, Apr 18, 2020 at 4:54 PM Mark Adams wrote:
>
>>
>>
>> On Sat, Apr 18, 2020 at 3:05 PM Xiaoye S. Li wrote:
>>
>>> Mark,
>>>
>>> It seems you are talking about serial superlu? There is no GPU support
>>>
I/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/include
> > > Library:
> -Wl,-rpath,/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib
> > > -L/ccs/home/adams/petsc/arch-summit-opt-gnu-cuda-omp/lib -lsuperlu
> > >
> > > which is serial superlu, not supe
'superlu' - instead of 'superlu_dist' - hence this
> error.
>
> Satish
>
> On Sun, 19 Apr 2020, Mark Adams wrote:
>
> > >
> > >
> > >
> > > > > --download-superlu --download-superlu_dist
> > >
> > > You a
On Mon, May 4, 2020 at 10:24 AM Sajid Ali
wrote:
> Hi PETSc-developers,
>
> For a linear TS, when solving with -ts_type cn -ksp_type fgmres -pc_type
> gamg, the flag -pc_gamg_reuse_interpolation can be used to re-use the
> GAMG interpolation calculated when solving the first time step. Increasing
My code seems to be running correctly with threads, but I get this error in
PetscFinalize.
I looked at this in DDT and got an error in free here:
PetscErrorCode PetscStackDestroy(void)
{
  if (PetscStackActive()) {
    free(petscstack);
    petscstack = NULL;
  }
  return 0;
}
This error did not ha
wrote:
> Ok - this is on your Mac.
>
> >>>>>
> Executing: /usr/local/Cellar/mpich/3.3.2/bin/mpicc --version
> stdout:
> Apple clang version 11.0.3 (clang-1103.0.32.59)
> <<<<<
>
> XCode clang does not support OpenMP as far as I know.
I found a way to get an error in my code, so never mind.
On Tue, May 5, 2020 at 6:17 PM Mark Adams wrote:
> My code seems to be running correctly with threads, but I get this error in
> PetscFinalize.
>
> I looked at this in DDT and got an error in free here:
>
> PetscErrorCode
Thanks, the file system on SUMMIT (home directories) has problems. I moved
to the scratch (working) directories and it seems fine.
On Mon, May 18, 2020 at 4:31 PM Matthew Knepley wrote:
> It almost looks like your HDF5 tarball is incomplete.
>
>Matt
>
> On Mon, May 18, 2020
Is there a SeqAIJ Mat method for MatSetValuesBlocked,
like matsetvaluesblocked4_, that is not hardwired for bs=4?
>
>>
>> Matt: Could I use BAIJ with Plex?
>>
>
> Plex does this automatically if you have blocks.
>
I use DMCreateMatrix with forest or plex and I seem to get AIJ matrices.
Where does Plex get the block size?
I have not yet verified that bs is set in this matrix.
>
>
>> I think I may know what your problem is. Plex evaluates the blocksize by
> looking for an equal number of dofs
> on each point. This is sufficient, but not necessary. If you are using
> higher order methods, there is block structure
> there that I will not see.
>
I don't understand what the
I have DM_BOUNDARY_NONE.
On Wed, May 27, 2020 at 7:22 PM Jed Brown wrote:
> Matthew Knepley writes:
>
> > On Wed, May 27, 2020 at 7:09 PM Mark Adams wrote:
> >
> >>
> >>
> >>>>
> >>>> Matt: Could I use BAIJ with Plex?
On Wed, May 27, 2020 at 7:37 PM Matthew Knepley wrote:
> On Wed, May 27, 2020 at 7:34 PM Jed Brown wrote:
>
>> Mark Adams writes:
>>
>> > Nvidia's Nsight with 2D Q3 and bs=10 (attached).
>>
>> Thanks; this is basically the same as a CPU -- the
>
>
> Note that some DMPlex stuff might run faster if you just make one field
> with num_species components instead of num_species fields with one
> component each. It'll also make the block structure more exploitable.
>
Hmm, I assumed fields should be vectors (perhaps 0D for a scalar), but
mayb
>
>
> >
> > if (rp[low] == col) high = low+1;
> > else if (rp[t] > col) high = t;
> > else low = t;
>
> Replacing a single comparison per bsearch iteration with two doesn't
> seem like a good choice to me.
>
>
It forces the bisection search and the linear search to terminate in one
iter
On Thu, Jun 4, 2020 at 6:41 PM Sanjay Govindjee wrote:
> Thanks, that did it. Now I am wondering about all the other
> PETSC_NULL_INTEGER instances in
> my code. How should I be thinking about PETSC_NULL_INTEGER
I would think it's an integer array/pointer in C.
> and
> PETSC_DEFAULT_INTEGER
>
>
>>
>> Would we instead just have 40 (or perhaps slightly fewer) MPI processes
>> all sharing the GPUs? Surely this would be inefficient, and would PETSc
>> distribute the work across all 4 GPUs, or would every process end up using
>> a single GPU?
>>
> See
> https://docs.olcf.ornl.gov/systems/
On Fri, Jun 12, 2020 at 12:56 PM Qin Lu via petsc-users <
petsc-users@mcs.anl.gov> wrote:
> Hello,
>
> I plan to solve a small sparse linear equation system using the direct
> solver, since the number of unknowns is small (less than 1000). Here I got
> a few questions:
>
> 1. Is there a general gu
On Fri, Jun 12, 2020 at 1:18 PM Matthew Knepley wrote:
> On Fri, Jun 12, 2020 at 12:49 PM Qin Lu via petsc-users <
> petsc-users@mcs.anl.gov> wrote:
>
>> Hello,
>>
>> I plan to solve a small sparse linear equation system using the direct
>> solver, since the number of unknowns is small (less than
That is odd. Are these problems symmetric positive definite?
Eigenvalue estimates are a pain in practice, but I've never seen this. Hypre
has (better) smoothers that don't need them, and its AMG algorithm does not
need them either. I think ML does pretty much the same thing that I do.
If SPD then you defini
>
>
> > If SPD then you definitely want '-pc_gamg_esteig_ksp_type cg'. CG
> converges
> > faster and is more robust.
>
> Mark, this comes up a lot. Should we select it by default if MAT_SPD is
> set?
>
I didn't know about MAT_SPD. Yea, I'll add it.
On Sat, Jun 13, 2020 at 1:13 PM Mark Adams wrote:
>
>> > If SPD then you definitely want '-pc_gamg_esteig_ksp_type cg'. CG
>> converges
>> > faster and is more robust.
>>
>> Mark, this comes up a lot. Should we select it by default if MAT_SPD i
So you get this noise with a regular grid in p4est, that is, the same grid as
with Plex, and you are not getting the same results.
I don't know of any difference with p4est on a non-adapted grid. Can you
reproduce this with ex11?
Matt and Toby could answer this better.
On Wed, Jun 17, 2020 at 1:33 PM
ug for me to not ask this question.
>>
>> Thanks,
>> Dave
>>
>>
>> Maybe this paints a better picture.
>>>
>>> Regards,
>>>
>>> Mukkund
>>>
>>> For your reference, the Riemann Solver is a modified version of the HLL
>>
I am trying to configure the GPU nodes at NERSC and the instructions are
pretty slim.
Any idea what is wrong here?
Thanks,
Mark
configure.log
Description: Binary data
I would put a print statement in your point method (Riemann solver) and
print all the incoming values, face center, normal, L/R states, etc. Then
'sort' the output, because the ordering will be different, and 'diff' the
good and bad runs. See what changes.
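For example, a minimal sketch; the names x, n, uL, uR are assumptions about
what your point method receives, so adapt to your actual argument list:

  /* inside your Riemann solver: dump everything that determines the flux,
     one line per face, so the output can be sorted and diffed across runs */
  PetscPrintf(PETSC_COMM_SELF, "x %g %g n %g %g uL %g uR %g\n",
              (double)x[0], (double)x[1], (double)n[0], (double)n[1],
              (double)PetscRealPart(uL[0]), (double)PetscRealPart(uR[0]));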
>
> Please try the branch barry/2020-06-18/filter-nersc-cray/maint and
> DO NOT set the variable CRAY_CPU_TARGET (to test the branch; if it
> works it will be put into maint)
>
> * Thanks*
>
> * Barry*
>
>
> On Jun 18, 2020, at 3:57 PM, Mark Adams wrote:
>
I don't know what is going on here. There was an update to this function
about a year ago, so that might fix your problem.
We would need you to test with a current version.
Mark
On Fri, Jun 19, 2020 at 11:23 AM Eda Oktay wrote:
> Hi all,
>
> I am trying to find off block diagonal entries of a
3 x 72, 3 x 72
> [2]PETSC ERROR: See
> https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble
> shooting.
> [2]PETSC ERROR: Petsc Release Version 3.13.2, Jun 02, 2020
> [2]PETSC ERROR:
> ./approx_cut_deneme_clustering_son_final_edgecut_without_parmetis on a
> arch-linux2-
My code runs OK on SUMMIT but ex56 does not. ex56 runs in serial but I get
this segv in parallel. I also see these memcheck messages
from PetscSFBcastAndOpBegin in my code and ex56.
I ran this in DDT and was able to get a stack trace and look at variables.
The segv is on sfbasic.c:148:
ierr =
MPI
On Tue, Jun 23, 2020 at 9:54 AM Jed Brown wrote:
> Did you use --smpiargs=-gpu, and have you tried if the error is still
>
> there with -use_gpu_aware_mpi 0?
That did it. Thanks,
> I assume you're using -dm_vec_type cuda?
>
> Mark Adams writes:
>
> > My
d the test. Hoping it fixes the pipeline ex56/cuda diffs.
On Tue, Jun 23, 2020 at 11:07 AM Stefano Zampini
wrote:
> What did it? How are you running now to have everything working? Can you
> post smpiargs and petsc options?
>
> Il giorno mar 23 giu 2020 alle ore 17:51 Mark Ad
You want to make an MR (check "delete branch after merge" and "squash
commits"). You can mark it as WIP, run the pipeline, and after you get it to
run clean and address any comments, unmark WIP. If it does not get merged in a
few days you could ask Satish.
Mark
On Tue, Jun 30, 2020 at 5:12 PM MUKKUND SUNJ
As Matt said, AMG does not work for everything out of the box.
Hypre is pretty robust, and I am surprised that every option gives
Infs/NaNs. But if it solves your stretched Laplacian then it's probably
hooked up correctly.
You might try a non-adjoint Navier-Stokes problem, if you can.
You could try
KSPSetType(levelSmoother, KSPFGMRES);
> }
>
> Thank you very much!
> Best regards,
> Andrea
>
> --
> *From:* Mark Adams
> *Sent:* Wednesday, July 1, 2020 2:15 PM
> *To:* Matthew Knepley
> *Cc:* Andrea Iob ; petsc-users@mcs.anl.gov <
> pe
>
>
>
> 3) Schwarz-smoothers
>
> #0 0x750897f4 in dtrsm_ () from
> /opt/lapack/3.8.0-gcc.7.4.0/lib64/libblas.so.3
> #1 0x7550924b in dpotrs_ () from
> /opt/lapack/3.8.0-gcc.7.4.0/lib64/liblapack.so.3
>
This seems to be Cholesky ... which would not work for you if you are not
symm
ISCreateGeneral just takes your indices and caches them, but it is a
synchronization point. What ratio (max/min) is PETSc reporting in this data?
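For reference, a minimal sketch of the call (the sizes and indices are made
up for illustration):

  PetscInt       idx[] = {0, 3, 5, 7};   /* locally owned indices */
  IS             is;
  PetscErrorCode ierr;
  /* collective on the communicator, hence the synchronization point */
  ierr = ISCreateGeneral(PETSC_COMM_WORLD, 4, idx, PETSC_COPY_VALUES, &is);CHKERRQ(ierr);
  ierr = ISDestroy(&is);CHKERRQ(ierr);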
On Sun, Jul 5, 2020 at 1:51 PM Y. Shidi wrote:
> Dear developers,
>
> I am currently doing a weak scaling test, and find that
> the weak scaling results for ISC
>
> Kind regards,
> Shidi
>
> On 2020-07-05 20:15, Mark Adams wrote:
> > ISCreateGeneral just takes your indices and caches them, but it is a
> > synchronization point. What ratio (max/min) is PETSc reporting in this data?
> >
> > On Sun, Jul 5, 2020 at 1:51 PM Y. Shidi
On Wed, Jul 8, 2020 at 2:43 PM Qin Lu via petsc-users <
petsc-users@mcs.anl.gov> wrote:
> Hello,
>
> I am using the PETSc native direct solver (with KSPPREONLY and PCLU) to
> solve a small linear equation system. The matrix has a couple of diagonal
> terms with value 0, which makes the solver fai
I'm not sure if those settings are appropriate for my system.
>
> Best regards,
> Andrea
>
>
> --
> *From:* Andrea Iob
> *Sent:* Thursday, July 9, 2020 2:55 PM
> *To:* Mark Adams
> *Cc:* Barry Smith ; Matthew Knepley ;
> petsc-users@m
>
>
>
> I would probably move out of VTK files in favor of something else if I had
> a way to encode VTK's (the library, not the file format) high-order
> Lagrange elements.
> Actually, I'm toying with dumping files with PETSc's raw binary I/O with
> MPI, and writing a proper ParaView plugin in Pyt
You do not have write permissions to the
> --prefix directory
> /global/common/software/m1041/petsc_install/petsc_knl_intel
> You will be prompted for the sudo password
> for any external package installs
>
> ======
Fande,
do you know if your 45226154 was out of range in the real matrix?
What size integers do you use?
Thanks,
Mark
On Mon, Jul 20, 2020 at 1:17 AM Fande Kong wrote:
> Trace could look like this:
>
> [640]PETSC ERROR: - Error Message
> --
If you search for (-i) "boundary" in ts/ex11.c you will see several
examples of setting BCs with ghost cells in FV.
Don't worry about hyperbolic per se. The PETSc interface is very abstract:
G(x,xdot,t) = F(x,t), or something like that. You can decide which side of
the equation to put each of your
e:
> Hi Mark,
>
> Thanks for your reply.
>
> On Mon, Jul 20, 2020 at 7:13 AM Mark Adams wrote:
>
>> Fande,
>> do you know if your 45226154 was out of range in the real matrix?
>>
>
> I do not know since it was in building the AMG hierarchy. The size of
On Mon, Jul 20, 2020 at 2:36 PM Fande Kong wrote:
> Hi Mark,
>
> Just to be clear, I do not think it is related to GAMG or PtAP. It is a
> communication issue:
>
Your stack trace was from PtAP, but Chris's problem is not.
>
> Reran the same code, and I just got :
>
> [252]PETSC ERROR:
ome of the memory of static pointers gets corrupted, although I would
> expect a garbage number and not something that could possibly make sense.
>
> *Chris Hewson*
> Senior Reservoir Simulation Engineer
> ResFrac
> +1.587.575.9792
>
>
> On Mon, Jul 20, 2020 at 12:41 PM Ma
On Tue, Jul 21, 2020 at 9:46 AM Matthew Knepley wrote:
> On Tue, Jul 21, 2020 at 9:35 AM Pierpaolo Minelli <
> pierpaolo.mine...@cnr.it> wrote:
>
>> Thanks for your reply.
>> As I wrote before, I use these settings:
>>
>> -dm_mat_type hypre -pc_type hypre -pc_hypre_type boomeramg
>> -pc_hypre_boo
This also looks like it could be some sort of library mismatch. You might
try deleting your architecture directory and starting over. This is PETSc's
"make realclean".
On Tue, Jul 21, 2020 at 10:45 AM Stefano Zampini
wrote:
>
>
> On Jul 21, 2020, at 1:32 PM, Pierpaolo Minelli
> wrote:
>
> Hi,
>
> I ha
On Tue, Jul 21, 2020 at 12:06 PM Pierpaolo Minelli
wrote:
>
>
> Il giorno 21 lug 2020, alle ore 16:56, Mark Adams ha
> scritto:
>
>
>
> On Tue, Jul 21, 2020 at 9:46 AM Matthew Knepley wrote:
>
>> On Tue, Jul 21, 2020 at 9:35 AM Pierpaolo Minelli <
On Tue, Jul 21, 2020 at 12:11 PM Pierpaolo Minelli
wrote:
>
>
> Il giorno 21 lug 2020, alle ore 16:58, Mark Adams ha
> scritto:
>
> This also looks like it could be some sort of library mismatch. You might
> try deleting your architecture directory and start over. This PETS
KSPSetOperators tells the KSP that the PC should be set up again.
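A minimal sketch of the pattern; the names (A, ksp, b, x, UpdateMatrixValues)
are hypothetical stand-ins for your objects:

  for (step = 0; step < nsteps; ++step) {
    ierr = UpdateMatrixValues(A, step);CHKERRQ(ierr); /* new entries, same Mat object */
    ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);  /* flags the PC to be set up again */
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);         /* PCSetUp happens here, lazily */
  }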
On Tue, Jul 21, 2020 at 3:45 PM Alex Fleeter
wrote:
> Hi:
>
> I want to ask under what circumstance will trigger a call for pc setup.
>
> I call KSPSolve to solve with the same Mat object with different entry
> values each time. I can s
On Wed, Jul 22, 2020 at 8:05 AM Adolfo Rodriguez wrote:
> I am trying to replace the non-linear solver in a flow simulation problem
> where the matrix sparsity can change during the iterations. I tried
> successfully to create the matrix within the FormJacobian function but I
> have a memory leak
I get all the coordinates with this method:
static PetscErrorCode crd_func(PetscInt dim, PetscReal time, const
PetscReal x[], PetscInt Nf_dummy, PetscScalar *u, void *actx)
{
  PetscInt i;
  PetscFunctionBeginUser;
  /* the field value at point x is just the coordinate x itself */
  for (i = 0; i < dim; ++i) u[i] = x[i];
  PetscFunctionReturn(0);
}
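A minimal sketch of how such a point function might be used, assuming a DM
"crddm" with a dim-component field (as set up further down in this thread);
this is an assumption about the surrounding code, not a verbatim continuation:

  PetscErrorCode (*funcs[1])(PetscInt, PetscReal, const PetscReal[], PetscInt, PetscScalar *, void *) = {crd_func};
  Vec crd_vec;
  ierr = DMCreateGlobalVector(crddm, &crd_vec);CHKERRQ(ierr);
  /* project crd_func into the field: crd_vec now holds the coordinates */
  ierr = DMProjectFunction(crddm, 0.0, funcs, NULL, INSERT_ALL_VALUES, crd_vec);CHKERRQ(ierr);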
PetscErrorCode (
_vec, cell, NULL,
&coef);CHKERRQ(ierr);
ierr = DMPlexVecRestoreClosure(plex, section, crd_vec, cell, NULL,
&coef);CHKERRQ(ierr);
On Sat, Jul 25, 2020 at 7:13 AM Mark Adams wrote:
> I get all the coordinates with this method:
>
> static PetscErrorCode crd_func(PetscInt dim,
CreateDS(crddm);CHKERRV(ierr);
ierr = PetscFEDestroy(&fe);CHKERRV(ierr);
On Sat, Jul 25, 2020 at 7:40 AM Stefano Zampini
wrote:
> Mark
>
> This will only work if you have a vector space for the function
>
>
> On Jul 25, 2020, at 1:13 PM, Mark Adams wrote:
>
> I ge
On Sat, Jul 25, 2020 at 9:08 AM Mark Adams wrote:
> Yea, I did not get all the code you need. Here is an example of making
> crddm. I'm not sure if this is all best practices (Matt?)
>
> /* create coordinate DM */
> ierr = DMClone(dm, &crddm);CHKERRV(ierr);
>
You can add your own convergence test and do anything that you want:
https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESSetConvergenceTest.html
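A minimal sketch (the names are hypothetical) that wraps the default test and
adds an extra stopping criterion:

static PetscErrorCode MyConvergenceTest(SNES snes, PetscInt it, PetscReal xnorm, PetscReal snorm, PetscReal fnorm, SNESConvergedReason *reason, void *ctx)
{
  PetscErrorCode ierr;
  PetscFunctionBeginUser;
  ierr = SNESConvergedDefault(snes, it, xnorm, snorm, fnorm, reason, ctx);CHKERRQ(ierr);
  /* example of "anything you want": declare convergence on a tiny residual */
  if (*reason == SNES_CONVERGED_ITERATING && fnorm < 1.e-12) *reason = SNES_CONVERGED_FNORM_ABS;
  PetscFunctionReturn(0);
}
/* in your setup code: */
ierr = SNESSetConvergenceTest(snes, MyConvergenceTest, NULL, NULL);CHKERRQ(ierr);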
On Tue, Jul 28, 2020 at 11:44 AM Alexander Lindsay
wrote:
> To help debug the many emails we get about solves that fail to converge,
You can also do this with a monitor and get the converged reason to do what
you want:
https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/SNES/SNESMonitorSet.html#SNESMonitorSet
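A minimal sketch (hypothetical names) of a monitor that queries the current
reason and residual norm at each iteration:

static PetscErrorCode MyMonitor(SNES snes, PetscInt it, PetscReal fnorm, void *ctx)
{
  SNESConvergedReason reason;
  PetscErrorCode      ierr;
  PetscFunctionBeginUser;
  ierr = SNESGetConvergedReason(snes, &reason);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "iter %D: ||F|| = %g, reason = %d\n", it, (double)fnorm, (int)reason);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
/* in your setup code: */
ierr = SNESMonitorSet(snes, MyMonitor, NULL, NULL);CHKERRQ(ierr);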
On Tue, Jul 28, 2020 at 12:09 PM Mark Adams wrote:
> You can add your own convergence test and do anyth
I suspect that the Poisson and Ampere's law solves are not coupled. You
might be able to duplicate the communicator and use two threads. You would
want to configure PETSc with thread safety and threads, and I think it
could/should work, but this mode is never used by anyone.
That said, I would not rec
e the communicator to have them both run
concurrently.
> Did anyone ever tried to run 2 solvers with hyperthreading?
> Thanks
>
>
> Il giorno dom 2 ago 2020 alle ore 14:09 Mark Adams ha
> scritto:
>
>> I suspect that the Poisson and Ampere's law solve are not coupled.
on different subcommunicators; however, this would involve more communication.
> Did anyone ever tried to run 2 solvers with hyperthreading?
> Thanks
>
>
> Il giorno dom 2 ago 2020 alle ore 14:09 Mark Adams ha
> scritto:
>
>> I suspect that the Poisson and Ampere'
; -poisson_mg_levels_ksp_type chebyshev
> -poisson_mg_levels_pc_type jacobi
> -poisson_pc_gamg_agg_nsmooths 1
> -poisson_pc_gamg_coarse_eq_limit 10
> -poisson_pc_gamg_reuse_interpolation true
> -poisson_pc_gamg_square_graph 1
> -poisson_pc_gamg_threshold 0.05
> -poisson_pc_gamg_threshold
ould be to just put a print statement in the code
>>> before the LAPACK call or, if they are called many times, add an error check
>>> that generates an error if any of these three values are 0 (or negative).
>>>
>>>Barry
>>>
>
On Sun, Aug 16, 2020 at 12:26 PM Nidish wrote:
> Well some of the zero eigenvectors are rigid body modes, but there are
> some more which are introduced by lagrange-multiplier based constraint
> enforcement, which are non trivial.
>
If you want the null space for AMG solvers, they do not deal wi
t; from
> /scratch/snx3000/nvarini/petsc-3.13.3/src/vec/vec/impls/hypre/vhyp.c:7:
> /opt/nvidia/cudatoolkit10/10.1.105_3.27-7.0.1.1_4.1__ga311ce7/include/thrust/detail/config/cpp_compatibility.h:21:10:
> fatal error: cstddef: No such file or directory
> #include
>
-01 1.0 3.64e+07 1.0 2.7e+03 4.6e+03
> 3.0e+02 0 0 0 0 1 0 0 0 0 1 784
> GAMG: partLevel 4 1.0 1.1628e-01 1.0 5.90e+06 1.0 1.1e+03 3.0e+03
> 1.6e+02 0 0 0 0 1 0 0 0 0 1 604
> ==
> Nicola
>
>
> Il giorno lun 17 ago 2020 alle or
_ksp_type
> gmres
> but it still fails to converge.
> For the time being it seems that hypre on CPU is the safest choice,
> although it is surely worth experimenting with Stefano branch.
>
> Thanks,
>
> Nicola
>
> Il giorno lun 17 ago 2020 alle ore 18:10 Mark Adams
>
>
>
>
> So, if ParMETIS gives a different edge cut, as expected, then
> MatPartitioningGetUseEdgeWeights and MatPartitioningSetUseEdgeWeights work
> correctly. Why can't CHACO?
>
>>
>>
Chaco does not support using edge weights.
>
>
>
>
> Does MPI work fine on this box?
It has, but I don't use it much here lately.
> You can try disabling this check (manually) - and do the build, and run
>
> Does MPI run fine?
>
>
I will look. I got going by disabling MPI.
> Satish
>
>
> >>>
>
> diff --git a/config/BuildSystem
at 5:31 PM Satish Balay via petsc-users <
> > > petsc-users@mcs.anl.gov> wrote:
> > >
> > > > On Thu, 17 Sep 2020, Mark Adams wrote:
> > > >
>
Sep 17, 2020 at 5:31 PM Satish Balay via petsc-users <
> > > petsc-users@mcs.anl.gov> wrote:
> > >
> > > > On Thu, 17 Sep 2020, Mark Adams wrote:
> > > >
>
ils?
>
> So this is a false positive for you? [i.e the configure tests fails but
> MPI works?]
>
Let me know if you have more questions.
>
> Satish
>
> On Thu, 17 Sep 2020, Mark Adams wrote:
>
> > On Thu, Sep 17, 2020 at 6:00 PM Satish Balay wrote:
> >
Oh you did not change my hostname:
07:37 master *= ~/Codes/petsc$ hostname
MarksMac-302.local
07:41 master *= ~/Codes/petsc$ ping -c 2 MarksMac-302.local
PING marksmac-302.local (127.0.0.1): 56 data bytes
Request timeout for icmp_seq 0
--- marksmac-302.local ping statistics ---
2 packets transmit
On Fri, Sep 18, 2020 at 7:51 AM Matthew Knepley wrote:
> On Fri, Sep 18, 2020 at 7:46 AM Mark Adams wrote:
>
>> Oh you did not change my hostname:
>>
>> 07:37 master *= ~/Codes/petsc$ hostname
>> MarksMac-302.local
>> 07:41 master *= ~/Codes/petsc$ ping -c 2
ving differently.
>
> My theory is that if ping -c 2 `hostname` fails then MPICH and Open MPI
> mpiexec -n 2 will fail. We need to determine if this theory is correct or
> if you have a counter-example.
>
>
> On Sep 18, 2020, at 8:09 AM, Mark Adams wrote:
>
>
>
> On Fr
i.e:
> >
> > 127.0.0.1 MarksMac-302.local
> >
> > And now rerun MPI tests. Do they work or fail?
> >
> > [this is to check if this test is a false positive on your machine]
> >
> > Satish
> >
> >
> > On Fri, 18 Sep 2020, Mark Adams wro
Let me know if you want anything else.
Thanks,
Mark
On Fri, Sep 18, 2020 at 11:05 AM Mark Adams wrote:
>
>
> On Fri, Sep 18, 2020 at 11:04 AM Satish Balay wrote:
>
>> On Fri, 18 Sep 2020, Satish Balay via petsc-users wrote:
>>
>> > > >> 07:41 maste
ps max, 52 byte packets
1 localhost (127.0.0.1) 0.322 ms 0.057 ms 0.032 ms
12:12 adams/plex-noprealloc-fix=
~/Codes/petsc/src/ts/utils/dmplexlandau/tutorials$
>
>
> On Sep 18, 2020, at 10:07 AM, Mark Adams wrote:
>
> Let me know if you want anything else.
> Thanks,
> Mark
> a try...
>
> Best regards,
>
> Jacob Faibussowitsch
> (Jacob Fai - booss - oh - vitch)
> Cell: (312) 694-3391
>
> On Sep 18, 2020, at 11:08, Barry Smith wrote:
>
>
>try
>
> /usr/sbin/traceroute `hostname`
>
>
> On Sep 18, 2020, at 10:07 AM,
As Jed said, high frequency is hard. AMG, as-is, can be adapted (
https://link.springer.com/article/10.1007/s00466-006-0047-8) with
parameters.
AMG for convection: use richardson/sor rather than chebyshev smoothers, and in
smoothed aggregation (gamg) don't smooth (-pc_gamg_agg_nsmooths 0).
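For example, a rough sketch of the corresponding options (no solver prefix
shown; adjust for your setup):
-pc_type gamg
-pc_gamg_agg_nsmooths 0
-mg_levels_ksp_type richardson
-mg_levels_pc_type sor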
Mark
On Sat,
Generally you do want to use FieldSplit, but Vanka might work.
I'm not sure what to use as the smoother KSP. -mg_levels_ksp_type [gmres |
richardson]. If you use richardson you will probably want to fiddle with
the damping.
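A rough sketch of what that might look like (the scale value is just a
starting point to tune, not a recommendation):
-mg_levels_ksp_type richardson
-mg_levels_ksp_richardson_scale 0.5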
On Tue, Sep 22, 2020 at 11:23 PM Barry Smith wrote:
>
> Raju,
>
>
>
>
> Is there a way to avoid the explicit transposition of the matrix?
>
It does not look like we have A*B^T for MPIAIJ, as the error message says. I
am not finding it in the code.
Note, for MatMatMult with a transpose shell matrix, I suspect that it does an
explicit transpose internally, or it could
This communication is all in PCApply. What -pc_type are you using?
It looks like -pc_type ssor (or is it sor?). That is not implemented on the
GPU. You can use 'jacobi'.
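For example, a rough sketch of options (assuming CUDA vector/matrix types, as
elsewhere in this thread):
-vec_type cuda
-mat_type aijcusparse
-pc_type jacobi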
On Thu, Sep 24, 2020 at 11:08 AM Zhang, Chonglin wrote:
> Dear PETSc Users,
>
> I have some questions regarding the proper GPU