It's an option that he would have set explicitly via MatSetOption, following
Lawrence's suggestion. He can either not call that function or use PETSC_FALSE
to unset it.
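For reference, a minimal sketch of what that toggle looks like in user code (the matrix name `A`, the wrapper function, and the error-checking style are illustrative, not from Mohammad's application):

```c
#include <petscmat.h>

/* Sketch only: enable or disable MAT_SUBSET_OFF_PROC_ENTRIES on a matrix A.
 * Passing PETSC_TRUE promises that later assemblies will only ever set the
 * same (or a subset of the) off-process entries set in the first assembly,
 * which lets PETSc reuse its communication pattern; PETSC_FALSE restores
 * the default behavior. */
PetscErrorCode toggle_subset_off_proc(Mat A, PetscBool enable)
{
  PetscErrorCode ierr;

  ierr = MatSetOption(A, MAT_SUBSET_OFF_PROC_ENTRIES, enable);CHKERRQ(ierr);
  return 0;
}
```

As Jed notes, simply never calling `MatSetOption` with this flag is equivalent to passing `PETSC_FALSE`.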
Junchao Zhang writes:
> Is there an option to turn off MAT_SUBSET_OFF_PROC_ENTRIES for Mohammad to
> try?
>
> --Junchao Zhang
Is there an option to turn off MAT_SUBSET_OFF_PROC_ENTRIES for Mohammad to
try?
--Junchao Zhang
On Sun, Mar 28, 2021 at 3:34 PM Jed Brown wrote:
> I take it this was using MAT_SUBSET_OFF_PROC_ENTRIES. I implemented that
> to help performance of PHASTA and other applications that assemble matrices …
I take it this was using MAT_SUBSET_OFF_PROC_ENTRIES. I implemented that to
help performance of PHASTA and other applications that assemble matrices that
are relatively cheap to solve (so assembly cost is significant compared to
preconditioner setup and KSPSolve), and I'm glad it helps so much here.
Here is the plot of run time in old and new petsc using 1,2,4,8, and 16
CPUs (in logarithmic scale):
[image: Screenshot from 2021-03-28 10-48-56.png]
On Thu, Mar 25, 2021 at 12:51 PM Mohammad Gohardoust
wrote:
> That's right, these loops also take roughly half the time. If I am not
> mistaken …
Thanks Lawrence for your suggestion. It did work and the BuildTwoSided time
is almost zero now:
BuildTwoSided          4 1.0 1.6279e-03 10.1 0.00e+00 0.0 2.0e+02 4.0e+00 4.0e+00
BuildTwoSidedF         3 1.0 1.5969e-03 10.5 0.00e+00 0.0 2.0e+02 3.6e+03 3.0e+00
Mohammad
On Thu, Mar 25, 2021 at 4:54 AM Lawrence Mitchell wrote:
That's right, these loops also take roughly half the time. If I am not
mistaken, petsc (MatSetValue) is called after doing some calculations over
each tetrahedral element.
Thanks for your suggestion. I will try that and will post the results.
Mohammad
On Wed, Mar 24, 2021 at 3:23 PM Junchao Zhang wrote:
> On 24 Mar 2021, at 01:30, Matthew Knepley wrote:
>
> This is true, but all the PETSc operations are speeding up by a factor of 2.
> It is hard to believe these were run on the same machine.
> For example, VecScale speeds up!?! So it is not the network or optimizations.
> I cannot explain this.
On Wed, Mar 24, 2021 at 2:17 AM Mohammad Gohardoust
wrote:
> So the code itself is a finite-element scheme and in stage 1 and 3 there
> are expensive loops over entire mesh elements which consume a lot of time.
>
So these expensive loops must also take half the time with newer petsc? And
these loops …
I built petsc 3.13.4 and got results similar to the old ones. I am
attaching the log-view output file here.
Mohammad
On Tue, Mar 23, 2021 at 6:49 PM Satish Balay via petsc-users <
petsc-users@mcs.anl.gov> wrote:
> On Tue, 23 Mar 2021, Matthew Knepley wrote:
>
> > On Tue, Mar 23, 2021 at 9:08 PM
So the code itself is a finite-element scheme and in stage 1 and 3 there
are expensive loops over entire mesh elements which consume a lot of time.
Mohammad
On Tue, Mar 23, 2021 at 6:08 PM Junchao Zhang
wrote:
> In the new log, I saw
>
> Summary of Stages:   ----- Time ------  ----- Flop ------ …
On Tue, 23 Mar 2021, Matthew Knepley wrote:
> On Tue, Mar 23, 2021 at 9:08 PM Junchao Zhang
> wrote:
>
> > In the new log, I saw
> >
> > Summary of Stages:   ----- Time ------  ----- Flop ------  --- Messages ---  -- Message Lengths --  -- Reductions --
> >                          Avg     %Total …
On Tue, Mar 23, 2021 at 9:08 PM Junchao Zhang
wrote:
> In the new log, I saw
>
> Summary of Stages:   ----- Time ------  ----- Flop ------  --- Messages ---  -- Message Lengths --  -- Reductions --
>                          Avg     %Total     Avg     %Total    Count   %Total     Avg …
In the new log, I saw
Summary of Stages:   ----- Time ------  ----- Flop ------  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total    Count   %Total     Avg         %Total    Count   %Total
 0:      Main Stage: 5.4095e+00   2.3%  4.37…
Thanks Dave for your reply.
For sure PETSc is awesome :D
Yes, in both cases petsc was configured with --with-debugging=0 and
fortunately I do have the old and new -log_view outputs, which I attached.
Best,
Mohammad
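For anyone reproducing this comparison, a sketch of the relevant commands (the paths, the `-O2` flags, and the executable name `./app` are illustrative):

```shell
# Optimized (non-debug) PETSc build, as Dave assumes below; required for
# meaningful timing comparisons.
./configure --with-debugging=0 COPTFLAGS="-O2" CXXOPTFLAGS="-O2" FOPTFLAGS="-O2"

# Run the application under each PETSc build and keep the profiles.
mpiexec -n 16 ./app -log_view > log_view_new.txt
```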
On Tue, Mar 23, 2021 at 1:37 AM Dave May wrote:
> Nice to hear!
> The answer is
Nice to hear!
The answer is simple, PETSc is awesome :)
Jokes aside, assuming both petsc builds were configured with
--with-debugging=0, I don't think there is a definitive answer to your
question with the information you provided.
It could be as simple as one specific implementation you use was i…
Hi,
I am using a code which is based on petsc (and also parmetis). Recently I
made the following changes and now the code is running about two times
faster than before:
- Upgraded Ubuntu 18.04 to 20.04
- Upgraded petsc 3.13.4 to 3.14.5
- This time I installed parmetis and metis directly