Dear Barry:

You were right! The problem is that I am using the background DMDA mesh for the 
domain partitioning of the DMSwarm, as in “dm/tutorials/swarm_ex3.c”, and then 
“DMGetNeighbors” to locate the neighbouring ranks, including those on the other 
side of the domain when I am using periodic BCs.

Therefore, if I define the background DMDA with periodic BCs, the particle 
domain partitioning is uneven, but I can locate the periodic neighbour ranks 
precisely.
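
Roughly, the setup looks like this (just a sketch, not my actual code; nx, ny, 
nz are placeholder grid sizes, and the usual petscdmda.h / petscdmswarm.h 
headers are assumed):

/* Periodic background DMDA used as the cell DM of a DMSwarm (as in
   dm/tutorials/swarm_ex3.c), then DMGetNeighbors to obtain the neighbouring
   ranks; with DM_BOUNDARY_PERIODIC the list also contains the ranks on the
   far side of the domain. */
DM                 da, swarm;
PetscInt           nneighbors;
const PetscMPIInt *neighbors;

PetscCall(DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_PERIODIC, DM_BOUNDARY_PERIODIC,
                       DM_BOUNDARY_PERIODIC, DMDA_STENCIL_BOX, nx, ny, nz,
                       PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE, 1, 1,
                       NULL, NULL, NULL, &da));
PetscCall(DMSetUp(da));

PetscCall(DMCreate(PETSC_COMM_WORLD, &swarm));
PetscCall(DMSetType(swarm, DMSWARM));
PetscCall(DMSwarmSetType(swarm, DMSWARM_PIC));
PetscCall(DMSwarmSetCellDM(swarm, da));

/* For a 3D DMDA this returns the neighbouring ranks (self included, up to 27). */
PetscCall(DMGetNeighbors(da, &nneighbors, &neighbors));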

Thanks,
Miguel

On 20 Nov 2024, at 23:40, MIGUEL MOLINOS PEREZ <mmoli...@us.es> wrote:

I see… that might be the problem. I’ll check it tomorrow. Thank you!

Miguel

On 20 Nov 2024, at 22:57, Barry Smith <bsm...@petsc.dev> wrote:



On Nov 20, 2024, at 2:38 PM, MIGUEL MOLINOS PEREZ <mmoli...@us.es> wrote:

Yes, I use the vertices (nodes) of the elements.

   Then the length between each vertex will be different between the periodic 
and non-periodic cases. With 10 points and non-periodic, it will be 1/9, and 
with periodic it will be 1/10. Is this what you are asking about?
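
   For concreteness, on a unit-length edge (a tiny fragment; M here is just a 
stand-in for the number of vertex-centered points along the edge):

PetscInt  M = 10; /* number of vertex-centered points along the edge */
/* Non-periodic: M points span M-1 intervals -> h = 1/(M-1)  (M = 10 gives 1/9).
   Periodic: the last point wraps around to the first, so there are M
   intervals                                 -> h = 1/M      (M = 10 gives 1/10). */
PetscReal h_nonperiodic = 1.0 / (M - 1);
PetscReal h_periodic    = 1.0 / M;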



I am using the DMDA as an auxiliary mesh to do the domain partitioning of the 
DMSwarm.

Thanks,
Miguel



On 20 Nov 2024, at 19:54, Barry Smith <bsm...@petsc.dev> wrote:


   Are you considering your degrees of freedom as vertex or cell-centered?

   Say three "elements" per edge.

       If vertex-centered, then the discretization size is 1/3 if periodic and 
1/2 if not periodic.

       If cell-centered, then each cell has width 1/3 for both periodic and 
non-periodic.

    But in both cases you can think of the discretization size as constant 
along the whole cube edge.
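
    One way to see this concretely (a small standalone sketch, not taken from 
your code; I am assuming DMDASetUniformCoordinates does the usual thing, i.e. 
it uses M intervals in a periodic direction and M-1 otherwise):

#include <petscdmda.h>

int main(int argc, char **argv)
{
  DM  da;
  Vec coords;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  /* 3 grid points per direction on the unit cube; switch DM_BOUNDARY_PERIODIC
     to DM_BOUNDARY_NONE to compare the resulting vertex spacing. */
  PetscCall(DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_PERIODIC, DM_BOUNDARY_PERIODIC,
                         DM_BOUNDARY_PERIODIC, DMDA_STENCIL_BOX, 3, 3, 3,
                         PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE, 1, 1,
                         NULL, NULL, NULL, &da));
  PetscCall(DMSetFromOptions(da));
  PetscCall(DMSetUp(da));
  PetscCall(DMDASetUniformCoordinates(da, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0));
  /* Expect an x spacing of 1/3 in the periodic case and 1/2 with DM_BOUNDARY_NONE. */
  PetscCall(DMGetCoordinates(da, &coords));
  PetscCall(VecView(coords, PETSC_VIEWER_STDOUT_WORLD));
  PetscCall(DMDestroy(&da));
  PetscCall(PetscFinalize());
  return 0;
}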

    Is this related to DMSWARM in particular?

On Nov 20, 2024, at 12:56 PM, MIGUEL MOLINOS PEREZ <mmoli...@us.es> wrote:

I mean that if the dimensions of the cube are 1x1x1 (for example) and I want 
10 elements per edge, then the discretization size must be a constant 0.1 over 
the whole cube edge.

This is not in the code; I just impose the number of elements per edge.

Thank you,
Miguel

On 20 Nov 2024, at 18:52, Barry Smith <bsm...@petsc.dev> wrote:


  What do you mean by discretization size, and how do I see it in the code?

  Barry


On Nov 20, 2024, at 12:48 PM, MIGUEL MOLINOS PEREZ <mmoli...@us.es> wrote:

Sorry, I meant that the discretization size is not constant across the edges of 
the cube.

Miguel

On 20 Nov 2024, at 18:36, Barry Smith <bsm...@petsc.dev> wrote:


   I am sorry, I don't understand the problem. When I run by default with 
-da_view I get

Processor [0] M 3 N 3 P 3 m 2 n 2 p 2 w 1 s 1
X range of indices: 0 2, Y range of indices: 0 2, Z range of indices: 0 2
Processor [1] M 3 N 3 P 3 m 2 n 2 p 2 w 1 s 1
X range of indices: 2 3, Y range of indices: 0 2, Z range of indices: 0 2
Processor [2] M 3 N 3 P 3 m 2 n 2 p 2 w 1 s 1
X range of indices: 0 2, Y range of indices: 2 3, Z range of indices: 0 2
Processor [3] M 3 N 3 P 3 m 2 n 2 p 2 w 1 s 1
X range of indices: 2 3, Y range of indices: 2 3, Z range of indices: 0 2
Processor [4] M 3 N 3 P 3 m 2 n 2 p 2 w 1 s 1
X range of indices: 0 2, Y range of indices: 0 2, Z range of indices: 2 3
Processor [5] M 3 N 3 P 3 m 2 n 2 p 2 w 1 s 1
X range of indices: 2 3, Y range of indices: 0 2, Z range of indices: 2 3
Processor [6] M 3 N 3 P 3 m 2 n 2 p 2 w 1 s 1
X range of indices: 0 2, Y range of indices: 2 3, Z range of indices: 2 3
Processor [7] M 3 N 3 P 3 m 2 n 2 p 2 w 1 s 1
X range of indices: 2 3, Y range of indices: 2 3, Z range of indices: 2 3

which seems right because you are trying to have three cells in each direction. 
The distribution has to be uneven, hence 0 2 and 2 3.
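
If you want to see the same ownership information from inside the code, a 
fragment like this should do it (assuming the DMDA variable is called da and 
the usual petsc headers are included):

PetscInt    xs, ys, zs, xm, ym, zm;
PetscMPIInt rank;

PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
/* Each rank's owned corner of the global index space and its extents. */
PetscCall(DMDAGetCorners(da, &xs, &ys, &zs, &xm, &ym, &zm));
PetscCall(PetscSynchronizedPrintf(PETSC_COMM_WORLD,
  "[%d] X range: %" PetscInt_FMT " %" PetscInt_FMT ", Y range: %" PetscInt_FMT " %" PetscInt_FMT ", Z range: %" PetscInt_FMT " %" PetscInt_FMT "\n",
  rank, xs, xs + xm, ys, ys + ym, zs, zs + zm));
PetscCall(PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT));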

When I change the code to use ndiv_mesh_* = 4 and run with periodic or not, I get

$ PETSC_OPTIONS="" mpiexec -n 8 ./atoms-3D -dm_view
DM Object: 8 MPI processes
  type: da
Processor [0] M 4 N 4 P 4 m 2 n 2 p 2 w 1 s 1
X range of indices: 0 2, Y range of indices: 0 2, Z range of indices: 0 2
Processor [1] M 4 N 4 P 4 m 2 n 2 p 2 w 1 s 1
X range of indices: 2 4, Y range of indices: 0 2, Z range of indices: 0 2
Processor [2] M 4 N 4 P 4 m 2 n 2 p 2 w 1 s 1
X range of indices: 0 2, Y range of indices: 2 4, Z range of indices: 0 2
Processor [3] M 4 N 4 P 4 m 2 n 2 p 2 w 1 s 1
X range of indices: 2 4, Y range of indices: 2 4, Z range of indices: 0 2
Processor [4] M 4 N 4 P 4 m 2 n 2 p 2 w 1 s 1
X range of indices: 0 2, Y range of indices: 0 2, Z range of indices: 2 4
Processor [5] M 4 N 4 P 4 m 2 n 2 p 2 w 1 s 1
X range of indices: 2 4, Y range of indices: 0 2, Z range of indices: 2 4
Processor [6] M 4 N 4 P 4 m 2 n 2 p 2 w 1 s 1
X range of indices: 0 2, Y range of indices: 2 4, Z range of indices: 2 4
Processor [7] M 4 N 4 P 4 m 2 n 2 p 2 w 1 s 1
X range of indices: 2 4, Y range of indices: 2 4, Z range of indices: 2 4

so it is splitting as expected: each rank gets a 2 by 2 by 2 set of indices.

Could you please let me know what the problem is that I should be seeing?

  Barry


On Nov 20, 2024, at 7:06 AM, MIGUEL MOLINOS PEREZ <mmoli...@us.es> wrote:

Dear Barry,

Please find attached to this email a minimal example of the problem. Run it 
using 8 MPI processes.

Thanks,
Miguel





On 20 Nov 2024, at 11:48, Miguel Molinos <mmoli...@us.es> wrote:

Hi Barry:

I will check the example you suggest. Anyhow, I’ll send a reproducible example 
ASAP.

Thanks,
Miguel

On 19 Nov 2024, at 18:55, Barry Smith <bsm...@petsc.dev> wrote:


   I modified src/dm/tests/ex25.c and always see a nice even split when 
possible, with both DM_BOUNDARY_NONE and DM_BOUNDARY_PERIODIC.

    Can you please send a reproducible example?

    Thanks

     Barry


On Nov 19, 2024, at 6:14 AM, MIGUEL MOLINOS PEREZ <mmoli...@us.es> wrote:

Dear all:

It seems that if I mesh a cubic domain with “DMDACreate3d” using 8 bricks for 
discretization and with periodic boundaries, each of the bricks has a different 
size. In contrast, if I use DM_BOUNDARY_NONE, all 8 bricks have the same size.

I have used this together with the DMSwarm discretization, and as you can see, 
the number of particles per rank is not evenly distributed:
210 420 366 732 420 840 732 1464
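
For reference, the setup uses the background DMDA as the cell DM of the swarm. 
The sketch below is only an illustration of that (placeholder grid sizes and a 
placeholder particle lattice, not the attached atoms-3D.cpp, and I am assuming 
DMSwarmSetPointsUniformCoordinates drops each lattice point on the rank that 
owns it):

DM          da, swarm;
PetscInt    nlocal;
PetscMPIInt rank;
PetscReal   min[3] = {0.0, 0.0, 0.0}, max[3] = {1.0, 1.0, 1.0};
PetscInt    npoints[3] = {17, 10, 10}; /* placeholder lattice; the real positions come from the dump file */

PetscCall(DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_PERIODIC, DM_BOUNDARY_PERIODIC,
                       DM_BOUNDARY_PERIODIC, DMDA_STENCIL_BOX, 3, 3, 3,
                       PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE, 1, 1,
                       NULL, NULL, NULL, &da));
PetscCall(DMSetUp(da));
PetscCall(DMDASetUniformCoordinates(da, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0));

PetscCall(DMCreate(PETSC_COMM_WORLD, &swarm));
PetscCall(DMSetType(swarm, DMSWARM));
PetscCall(DMSwarmSetType(swarm, DMSWARM_PIC));
PetscCall(DMSwarmSetCellDM(swarm, da));
PetscCall(DMSwarmFinalizeFieldRegister(swarm));
PetscCall(DMSwarmSetLocalSizes(swarm, 0, 4));

/* Particles land on the rank that owns the corresponding piece of the DMDA,
   so unequal bricks give unequal particle counts. */
PetscCall(DMSwarmSetPointsUniformCoordinates(swarm, min, max, npoints, INSERT_VALUES));

PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
PetscCall(DMSwarmGetLocalSize(swarm, &nlocal));
PetscCall(PetscSynchronizedPrintf(PETSC_COMM_WORLD, "[%d] %" PetscInt_FMT " particles\n", rank, nlocal));
PetscCall(PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT));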

Am I missing something?

Thanks,
Miguel


<Screenshot 2024-11-19 at 10.56.36.png>



<atoms-3D.cpp><Mg-hcp-cube-x17-x10-x10.dump>