On 24 Apr 2025, at 6:08 PM, neil liu <liufi...@gmail.com> wrote:
Thanks a lot, Pierre. It works now. Another question: with the present strategy, after the adaptation we get a pseudo DM object that has all the information on rank 0 and nothing on the other ranks. I then tried DMPlexDistribute to partition it, and the partitioned DMs seem correct. Is it safe to do things like this?
I think this is sane, but if that triggers an error down the road, feel free to send a reproducer.
Thanks, Pierre
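For reference, the redistribution step being discussed might look roughly like the untested sketch below; "dmAdapted" is a placeholder name for the adapted DM that lives entirely on rank 0.

DM      dmParallel  = NULL;
PetscSF migrationSF = NULL;

/* DMPlexDistribute is collective on the DM's communicator, so every rank must
   call it, including ranks that currently own no cells. 0 = no overlap. */
PetscCall(DMPlexDistribute(dmAdapted, 0, &migrationSF, &dmParallel));
if (dmParallel) { /* NULL is returned when no redistribution was needed */
  PetscCall(PetscSFDestroy(&migrationSF)); /* or keep it to migrate field data */
  PetscCall(DMDestroy(&dmAdapted));
  dmAdapted = dmParallel; /* continue with the repartitioned mesh */
}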
MMG only supports serial execution, whereas ParMMG supports parallel mode (although ParMMG is not as robust or mature as MMG).
Given this, could you please provide some guidance on how to handle this in the code?
Here are my current thoughts; please let me know whether this could work as a temporary solution.
That could work, Pierre
We may only need to make minor modifications to the DMAdaptMetric_Mmg_Plex() function. Specifically:
- Allow all collective PETSc functions to run across all ranks as usual.
- Restrict the MMG-specific logic to run only on rank 0, since MMG is serial-only.
- Add a check before MMG is called to ensure that only rank 0 holds mesh cells, i.e., validate that cEnd - cStart > 0 only on rank 0. If more than one rank holds cells, raise a clear warning or error (see the sketch below).
If MMG does not support parallel communicators, we should handle it internally in the code, always use PETSC_COMM_SELF, and raise an error if two or more processes in the comm have cEnd - cStart > 0.
Thanks a lot, Pierre. Do you have any suggestions for building a real serial DM from this gatherDM? I tried several ways, which don't work. DMClone?
Thanks,
Thanks a lot, Stefano. I tried DMPlexGetGatherDM and DMPlexDistributeField, and they give what we expected. The final gatherDM is listed below: rank 0 has all the information (which is right) while rank 1 has nothing. Then I tried to feed this gatherDM into the MMG adaptation on rank 0 only (MMG seems to work better than ParMMG, which is why I want to try MMG first), but it got stuck at the collective PETSc functions in DMAdaptMetric_Mmg_Plex(). By the way, the present workflow works well with 1 rank.
Do you have any suggestions? Should I build a real serial DM?
Yes, you need to change the underlying MPI_Comm as well, but I’m not sure if there is any user-facing API for doing this with a one-liner.
Thanks, Pierre
Thanks a lot. Xiaodong
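For what it's worth, one way to rebuild a genuinely serial DM on PETSC_COMM_SELF from the gathered DM could look roughly like the sketch below. This is untested, only transfers cell-vertex connectivity and coordinates (labels, cell/face/vertex sets, etc. would have to be copied separately), assumes a pure tetrahedral mesh, and uses "gatherDM"/"serialDM" as placeholder names.

PetscMPIInt rank;
DM          serialDM = NULL;

PetscCallMPI(MPI_Comm_rank(PetscObjectComm((PetscObject)gatherDM), &rank));
if (rank == 0) {
  PetscInt           dim, cStart, cEnd, vStart, vEnd, numCells, numVertices;
  const PetscInt     numCorners = 4; /* assumes a pure tetrahedral mesh */
  PetscInt          *cells;
  PetscReal         *coords;
  const PetscScalar *coordArray;
  Vec                coordVec;

  PetscCall(DMGetDimension(gatherDM, &dim));
  PetscCall(DMPlexGetHeightStratum(gatherDM, 0, &cStart, &cEnd)); /* cells */
  PetscCall(DMPlexGetDepthStratum(gatherDM, 0, &vStart, &vEnd));  /* vertices */
  numCells    = cEnd - cStart;
  numVertices = vEnd - vStart;

  /* Cell-to-vertex connectivity, taken from the transitive closure of each cell */
  PetscCall(PetscMalloc1(numCells * numCorners, &cells));
  for (PetscInt c = cStart; c < cEnd; ++c) {
    PetscInt *closure = NULL, closureSize, nv = 0;

    PetscCall(DMPlexGetTransitiveClosure(gatherDM, c, PETSC_TRUE, &closureSize, &closure));
    for (PetscInt i = 0; i < closureSize * 2; i += 2) {
      const PetscInt p = closure[i];
      if (p >= vStart && p < vEnd) cells[(c - cStart) * numCorners + nv++] = p - vStart;
    }
    PetscCall(DMPlexRestoreTransitiveClosure(gatherDM, c, PETSC_TRUE, &closureSize, &closure));
  }

  /* Vertex coordinates, assuming the default coordinate layout (dim dofs per vertex) */
  PetscCall(DMGetCoordinatesLocal(gatherDM, &coordVec));
  PetscCall(VecGetArrayRead(coordVec, &coordArray));
  PetscCall(PetscMalloc1(numVertices * dim, &coords));
  for (PetscInt i = 0; i < numVertices * dim; ++i) coords[i] = PetscRealPart(coordArray[i]);
  PetscCall(VecRestoreArrayRead(coordVec, &coordArray));

  /* Build the serial mesh on PETSC_COMM_SELF, which changes the underlying comm */
  PetscCall(DMPlexCreateFromCellListPetsc(PETSC_COMM_SELF, dim, numCells, numVertices, numCorners,
                                          PETSC_TRUE, cells, dim, coords, &serialDM));
  PetscCall(PetscFree(cells));
  PetscCall(PetscFree(coords));
}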
DM Object: Parallel Mesh 2 MPI processes
  type: plex
Parallel Mesh in 3 dimensions:
  Number of 0-cells per rank: 56 0
  Number of 1-cells per rank: 289 0
  Number of 2-cells per rank: 452 0
  Number of 3-cells per rank: 216 0
Labels:
  depth: 4 strata with value/size (0 (56), 1 (289), 2 (452), 3 (216))
  celltype: 4 strata with value/size (0 (56), 1 (289), 3 (452), 6 (216))
  Cell Sets: 2 strata with value/size (29 (152), 30 (64))
  Face Sets: 3 strata with value/size (27 (8), 28 (40), 101 (20))
  Edge Sets: 1 strata with value/size (10 (10))
  Vertex Sets: 5 strata with value/size (27 (2), 28 (6), 29 (2), 101 (4), 106 (4))
Field Field_0:
  adjacency FEM
If you have a vector distributed on the original mesh, you can take the SF returned by DMPlexGetGatherDM and use it in a call to DMPlexDistributeField.
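A rough, untested sketch of that combination, where "dm" is the original parallel mesh, "sec" its local PetscSection (e.g., from DMGetLocalSection), and "errVec" the field to gather (all placeholder names):

PetscSF      gatherSF;
DM           gatherDM;
PetscSection gatherSec;
Vec          gatherVec;

PetscCall(DMPlexGetGatherDM(dm, &gatherSF, &gatherDM));
/* DMPlexDistributeField fills a caller-created section and vector */
PetscCall(PetscSectionCreate(PetscObjectComm((PetscObject)dm), &gatherSec));
PetscCall(VecCreate(PetscObjectComm((PetscObject)dm), &gatherVec));
PetscCall(DMPlexDistributeField(dm, gatherSF, sec, errVec, gatherSec, gatherVec));
/* gatherVec now holds the field in the gathered mesh's point numbering,
   entirely on rank 0, so it stays consistent with gatherDM */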
Dear PETSc developers and users, I am currently exploring the integration of MMG3D with PETSc. Since MMG3D supports only serial execution, I am planning to combine parallel and serial computing in my workflow. Specifically, after solving the linear systems in parallel using PETSc:
1. I intend to use DMPlexGetGatherDM to collect the entire mesh on the root process for input to MMG3D.
2. Additionally, I plan to gather the error field onto the root process using VecScatter (see the sketch below).
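For concreteness, step 2 as I currently picture it (a rough sketch; "errVec" is a placeholder for the distributed error field):

VecScatter scatter;
Vec        errOnRoot;

PetscCall(VecScatterCreateToZero(errVec, &scatter, &errOnRoot));
PetscCall(VecScatterBegin(scatter, errVec, errOnRoot, INSERT_VALUES, SCATTER_FORWARD));
PetscCall(VecScatterEnd(scatter, errVec, errOnRoot, INSERT_VALUES, SCATTER_FORWARD));
PetscCall(VecScatterDestroy(&scatter));
/* errOnRoot lives entirely on rank 0, but in the PETSc global ordering of the
   parallel Vec, which is exactly the ordering concern raised next. */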
However, I am concerned that the nth value in the gathered error vector (step 2) may not correspond to the nth element in the gathered mesh (step 1). Is this a valid concern? Do you have any suggestions or recommended practices for ensuring correct correspondence between the solution fields and the mesh when switching from parallel to serial mode?
Thanks,
Xiaodong
--
Stefano