On 24 Apr 2025, at 6:08 PM, neil liu wrote:
Thanks a lot, Pierre. It works now.
Another question: with the present strategy, after the adaptation we get a pseudo DM object that has all the information on rank 0 and nothing on the other ranks. I then tried to use DMPlexDistribute to partition it, and the partitioned DMs seem correct. Is i
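The redistribution step described above can be sketched as follows (a minimal sketch, not a definitive implementation; it assumes dmSerial is the adapted DM that lives on a parallel communicator but whose mesh sits entirely on rank 0):

```c
#include <petscdmplex.h>

/* Sketch: redistribute a DM whose mesh lives entirely on rank 0
   of a parallel communicator, as in the gather-then-adapt workflow. */
static PetscErrorCode RedistributeAdaptedDM(DM dmSerial, DM *dmParallel)
{
  PetscSF migrationSF = NULL;

  PetscFunctionBeginUser;
  /* 0 = no cell overlap; dmParallel is NULL if no redistribution happened */
  PetscCall(DMPlexDistribute(dmSerial, 0, &migrationSF, dmParallel));
  if (*dmParallel) {
    PetscCall(PetscSFDestroy(&migrationSF));
  } else {
    /* Already balanced (e.g. a single rank): keep the input DM */
    PetscCall(PetscObjectReference((PetscObject)dmSerial));
    *dmParallel = dmSerial;
  }
  PetscFunctionReturn(PETSC_SUCCESS);
}
```

The migration SF returned by DMPlexDistribute could also be kept and used to push field data onto the new partition, mirroring the gather direction.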
On 23 Apr 2025, at 7:28 PM, neil liu wrote:
*MMG only supports serial execution, whereas ParMMG supports parallel mode
(although ParMMG is not as robust or mature as MMG).*
Given this, could you please provide some guidance on how to handle this in
the code?
Here are my current thoughts; please let me know whether it could work as
a temporary
If MMG does not support parallel communicators, we should handle it
internally in the code: always use PETSC_COMM_SELF, and raise an error if
two or more processes in the comm have cEnd - cStart > 0.
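The guard described above might look like the following (a hypothetical helper, sketched under the assumption that cells are height-0 points of the Plex):

```c
#include <petscdmplex.h>

/* Sketch of the proposed guard: error out if more than one rank
   in the DM's communicator owns cells, since MMG can only run on
   a mesh gathered onto a single process. */
static PetscErrorCode CheckMeshIsSerial(DM dm)
{
  PetscInt    cStart, cEnd;
  PetscMPIInt hasCells, nWithCells;
  MPI_Comm    comm;

  PetscFunctionBeginUser;
  PetscCall(PetscObjectGetComm((PetscObject)dm, &comm));
  PetscCall(DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd)); /* cells */
  hasCells = (cEnd - cStart > 0) ? 1 : 0;
  PetscCallMPI(MPI_Allreduce(&hasCells, &nWithCells, 1, MPI_INT, MPI_SUM, comm));
  PetscCheck(nWithCells <= 1, comm, PETSC_ERR_SUP,
             "MMG requires the mesh on a single rank, but %d ranks have cells", (int)nWithCells);
  PetscFunctionReturn(PETSC_SUCCESS);
}
```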
On Wed, 23 Apr 2025 at 20:05, neil liu wrote:
Thanks a lot, Pierre.
Do you have any suggestions for building a real serial DM from this gatherDM?
I tried several ways, but they don't work.
DMClone?
Thanks,
On Wed, Apr 23, 2025 at 11:39 AM Pierre Jolivet wrote:
>
> On 23 Apr 2025, at 5:31 PM, neil liu wrote:
>
> Thanks a lot, Stefano.
> I tried DMPlexGetGatherDM and DMPlexDistributeField. They give what we
> expected.
> The final gatherDM is listed as follows: rank 0 has all the information
> (which is right) while rank 1 has nothing.
> Then I tried to feed this gatherDM into adaptMMG on rank 0 only (it seems
> MMG wor
If you have a vector distributed on the original mesh, you can take the
SF returned by DMPlexGetGatherDM and use it in a call to
DMPlexDistributeField.
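The gather-and-migrate step suggested above can be sketched as follows (a minimal sketch, assuming "section" is the PetscSection that lays out "field" on the original dm; names other than the PETSc calls are illustrative):

```c
#include <petscdmplex.h>

/* Sketch: gather the whole Plex onto rank 0 and move a field
   defined on the original mesh along with it, using the SF
   returned by DMPlexGetGatherDM. */
static PetscErrorCode GatherMeshAndField(DM dm, PetscSection section, Vec field,
                                         DM *gatherDM, Vec *gatherField)
{
  PetscSF      gatherSF;
  PetscSection gatherSection;
  MPI_Comm     comm;

  PetscFunctionBeginUser;
  PetscCall(PetscObjectGetComm((PetscObject)dm, &comm));
  /* Gather the mesh onto rank 0; gatherSF maps old points to new points */
  PetscCall(DMPlexGetGatherDM(dm, &gatherSF, gatherDM));
  /* DMPlexDistributeField fills the new section and vector for us */
  PetscCall(PetscSectionCreate(comm, &gatherSection));
  PetscCall(VecCreate(comm, gatherField));
  PetscCall(DMPlexDistributeField(dm, gatherSF, section, field, gatherSection, *gatherField));
  PetscCall(PetscSectionDestroy(&gatherSection));
  PetscCall(PetscSFDestroy(&gatherSF));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```

After adapting on rank 0, the result could be redistributed with DMPlexDistribute, which is the other half of the workflow discussed in this thread.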
On Fri, 18 Apr 2025 at 17:02, neil liu wrote:
Dear PETSc developers and users,
I am currently exploring the integration of MMG3D with PETSc. Since MMG3D
supports only serial execution, I am planning to combine parallel and
serial computing in my workflow. Specifically, after solving the linear
systems in parallel using PETSc:
1. I int