I am sorry, I don't understand the problem. When I run with the default settings 
and -da_view I get 

Processor [0] M 3 N 3 P 3 m 2 n 2 p 2 w 1 s 1
X range of indices: 0 2, Y range of indices: 0 2, Z range of indices: 0 2
Processor [1] M 3 N 3 P 3 m 2 n 2 p 2 w 1 s 1
X range of indices: 2 3, Y range of indices: 0 2, Z range of indices: 0 2
Processor [2] M 3 N 3 P 3 m 2 n 2 p 2 w 1 s 1
X range of indices: 0 2, Y range of indices: 2 3, Z range of indices: 0 2
Processor [3] M 3 N 3 P 3 m 2 n 2 p 2 w 1 s 1
X range of indices: 2 3, Y range of indices: 2 3, Z range of indices: 0 2
Processor [4] M 3 N 3 P 3 m 2 n 2 p 2 w 1 s 1
X range of indices: 0 2, Y range of indices: 0 2, Z range of indices: 2 3
Processor [5] M 3 N 3 P 3 m 2 n 2 p 2 w 1 s 1
X range of indices: 2 3, Y range of indices: 0 2, Z range of indices: 2 3
Processor [6] M 3 N 3 P 3 m 2 n 2 p 2 w 1 s 1
X range of indices: 0 2, Y range of indices: 2 3, Z range of indices: 2 3
Processor [7] M 3 N 3 P 3 m 2 n 2 p 2 w 1 s 1
X range of indices: 2 3, Y range of indices: 2 3, Z range of indices: 2 3

which seems right: you are asking for three cells in each direction split over two 
processes per direction, so the distribution has to be uneven, hence the 0 2 and 2 3 ranges.
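For reference, here is a minimal sketch of the kind of code that produces the output 
above. This is my own reconstruction, not your atoms-3D.cpp; it assumes a recent PETSc 
with PetscCall() and makes up the arguments beyond the 3x3x3 grid and stencil width 1.

  #include <petscdmda.h>

  int main(int argc, char **argv)
  {
    DM da;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
    /* 3x3x3 global grid, process grid chosen by PETSc (PETSC_DECIDE),
       1 dof, stencil width 1; swap DM_BOUNDARY_NONE for DM_BOUNDARY_PERIODIC
       to compare the periodic and non-periodic cases */
    PetscCall(DMDACreate3d(PETSC_COMM_WORLD,
                           DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                           DMDA_STENCIL_BOX, 3, 3, 3,
                           PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                           1, 1, NULL, NULL, NULL, &da));
    PetscCall(DMSetFromOptions(da));
    PetscCall(DMSetUp(da));
    /* prints the same per-rank index ranges as -dm_view */
    PetscCall(DMView(da, PETSC_VIEWER_STDOUT_WORLD));
    PetscCall(DMDestroy(&da));
    PetscCall(PetscFinalize());
    return 0;
  }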

When I change the code to use ndiv_mesh_* = 4 and run with either periodic or 
non-periodic boundaries, I get 

$ PETSC_OPTIONS="" mpiexec -n 8 ./atoms-3D -dm_view
DM Object: 8 MPI processes
  type: da
Processor [0] M 4 N 4 P 4 m 2 n 2 p 2 w 1 s 1
X range of indices: 0 2, Y range of indices: 0 2, Z range of indices: 0 2
Processor [1] M 4 N 4 P 4 m 2 n 2 p 2 w 1 s 1
X range of indices: 2 4, Y range of indices: 0 2, Z range of indices: 0 2
Processor [2] M 4 N 4 P 4 m 2 n 2 p 2 w 1 s 1
X range of indices: 0 2, Y range of indices: 2 4, Z range of indices: 0 2
Processor [3] M 4 N 4 P 4 m 2 n 2 p 2 w 1 s 1
X range of indices: 2 4, Y range of indices: 2 4, Z range of indices: 0 2
Processor [4] M 4 N 4 P 4 m 2 n 2 p 2 w 1 s 1
X range of indices: 0 2, Y range of indices: 0 2, Z range of indices: 2 4
Processor [5] M 4 N 4 P 4 m 2 n 2 p 2 w 1 s 1
X range of indices: 2 4, Y range of indices: 0 2, Z range of indices: 2 4
Processor [6] M 4 N 4 P 4 m 2 n 2 p 2 w 1 s 1
X range of indices: 0 2, Y range of indices: 2 4, Z range of indices: 2 4
Processor [7] M 4 N 4 P 4 m 2 n 2 p 2 w 1 s 1
X range of indices: 2 4, Y range of indices: 2 4, Z range of indices: 2 4

so it is splitting as expected: each rank gets a 2 by 2 by 2 set of indices.
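
If you want to verify the split inside the code rather than from the viewer output, 
a fragment along these lines (my sketch, to be placed after DMSetUp() on a DMDA named 
da, as in the sketch above) prints the box each rank owns; for the 4x4x4 grid every 
rank should report a 2 x 2 x 2 box.

    PetscInt    xs, ys, zs, xm, ym, zm;
    PetscMPIInt rank;

    PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
    /* corners of the locally owned part of the grid: start indices and widths */
    PetscCall(DMDAGetCorners(da, &xs, &ys, &zs, &xm, &ym, &zm));
    PetscCall(PetscSynchronizedPrintf(PETSC_COMM_WORLD,
              "[%d] owns a %" PetscInt_FMT " x %" PetscInt_FMT " x %" PetscInt_FMT
              " box starting at (%" PetscInt_FMT ", %" PetscInt_FMT ", %" PetscInt_FMT ")\n",
              rank, xm, ym, zm, xs, ys, zs));
    PetscCall(PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT));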

Could you please let me know what problem I should be seeing?

  Barry


> On Nov 20, 2024, at 7:06 AM, MIGUEL MOLINOS PEREZ <mmoli...@us.es> wrote:
> 
> Dear Barry,
> 
> Please find attached a minimal example of the problem. Run it using 8 MPI 
> processes. 
> 
> Thanks,
> Miguel
> 
> 
>> On 20 Nov 2024, at 11:48, Miguel Molinos <mmoli...@us.es> wrote:
>> 
>> Hi Barry:
>> 
>> I will check the example you suggest. Anyhow, I’ll send a reproducible 
>> example ASAP.
>> 
>> Thanks,
>> Miguel
>> 
>>> On 19 Nov 2024, at 18:55, Barry Smith <bsm...@petsc.dev> wrote:
>>> 
>>> 
>>>    I modified src/dm/tests/ex25.c and always see a nice even split (when 
>>> possible) with both DM_BOUNDARY_NONE and DM_BOUNDARY_PERIODIC.
>>> 
>>>     Can you please send a reproducible example?
>>> 
>>>     Thanks
>>> 
>>>      Barry
>>> 
>>> 
>>>> On Nov 19, 2024, at 6:14 AM, MIGUEL MOLINOS PEREZ <mmoli...@us.es> wrote:
>>>> 
>>>> Dear all:
>>>> 
>>>> It seems that if I mesh a cubic domain with “DMDACreate3d” using 8 bricks 
>>>> for discretization and with periodic boundaries, each of the bricks has a 
>>>> different size. In contrast, if I use DM_BOUNDARY_NONE, all 8 bricks have 
>>>> the same size. 
>>>> 
>>>> I have used this together with a DMSwarm discretization, and as you can 
>>>> see the number of particles per rank is not evenly distributed: 
>>>> 210 420 366 732 420 840 732 1464
>>>> 
>>>> Am I missing something?
>>>> 
>>>> Thanks,
>>>> Miguel
>>>> 
>>>> 
>>>> <Screenshot 2024-11-19 at 10.56.36.png>
>>> 
>> 
> 
> <atoms-3D.cpp><Mg-hcp-cube-x17-x10-x10.dump>
