Hi,
I've been debugging and have removed some errors. The code now works on most
cluster nodes but fails on 2 of them. The strange thing is that I'm
using the same GNU compiler but only deploying to the newly set-up nodes.
The newer nodes work when using my old code, which is similar except
that
It appears that you have an uninitialized variable (or more than one). When
compiled with debugging, variables
are normally initialized to zero.
Thanks,
Matt
On Fri, Dec 25, 2015 at 5:41 AM, TAY wee-beng wrote:
> Hi,
>
> Sorry, there seem to be some problems with my valgrind. I have re
Hi,
I just realised that the nodes I tested on may have some problems,
as they have only just been set up. So there may be a problem in the
MPI communication. I now do my tests on the old nodes.
I have reduced my problem to a serial one. The code works fine with the
debug version. But for the optimized
> On Dec 24, 2015, at 10:37 PM, TAY wee-beng wrote:
>
> Hi,
>
> I tried valgrind in MPI but it aborts very early, with the error msg
> regarding PETSc initialize.
It shouldn't "abort"; it should print an error message and continue. Please
send all the output from running with valgrind.
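For reference, a typical way to run valgrind under MPI (a sketch only: `./mycode` and the rank count are placeholders, and the launcher name depends on your MPI installation):

```shell
# One valgrind instance per MPI rank; -q suppresses the banner,
# --log-file=%p gives each rank its own log named by PID.
mpiexec -n 2 valgrind -q --tool=memcheck --track-origins=yes \
        --log-file=valgrind.%p.log ./mycode
```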
Hi,
I tried valgrind in MPI but it aborts very early, with the error msg
regarding PETSc initialize.
I retried, using a lower resolution.
GAMG works, but BoomerAMG/hypre doesn't. Increasing the CPU count too
high (80) also causes it to hang; 60 works fine.
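For context, the two preconditioners being compared are usually selected with standard PETSc runtime options (a sketch; `./mycode` is a placeholder, and the hypre option requires a PETSc build configured with hypre):

```shell
# PETSc's native algebraic multigrid:
mpiexec -n 60 ./mycode -pc_type gamg

# hypre's BoomerAMG:
mpiexec -n 60 ./mycode -pc_type hypre -pc_hypre_type boomeramg
```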
My grid size is 98x169x169
But whe
It sounds like you have memory corruption in a different part of the code.
Run in valgrind.
Matt
On Thu, Dec 24, 2015 at 10:14 AM, TAY wee-beng wrote:
> Hi,
>
> I have this strange error. I converted my CFD code from a z-direction-only
> partition to a yz-direction partition. The code works
Hi,
I have this strange error. I converted my CFD code from a z-direction-only
partition to a yz-direction partition. The code works fine, but when
I increase the number of CPUs, strange things happen when solving the
Poisson eqn.
I increased the CPU count from 24 to 40.
Sometimes it works, sometimes it do