Thank you, Matt, for your time.

What you describe seems to me the ideal approach.

1) Add a particle field 'ghost' that identifies ghost vs owned particles. I 
think it needs options OWNED, OVERLAP, and GHOST
Does this mean that, locally, I need to allocate Nlocal + ghost particles 
(duplicated) for my model? If so, how do I communicate data between the ghost 
particles living on rank i and their "real" counterparts on rank j?
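
For concreteness, this is how I read step 1; a minimal sketch, where the enum 
names and the helper function are my own placeholders rather than PETSc API:

#include <petscdmswarm.h>

/* Status values stored in the extra 'ghost' field (names are placeholders) */
typedef enum {PSTATUS_OWNED = 0, PSTATUS_OVERLAP = 1, PSTATUS_GHOST = 2} ParticleStatus;

static PetscErrorCode RegisterGhostField(DM swarm)
{
  PetscFunctionBeginUser;
  /* One extra integer per particle; must be registered before DMSwarmFinalizeFieldRegister() */
  PetscCall(DMSwarmRegisterPetscDatatypeField(swarm, "ghost", 1, PETSC_INT));
  PetscFunctionReturn(PETSC_SUCCESS);
}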

Also, as an alternative, what about the following:
1) Use an IS which contains, for each rank, the list of global indices of the 
neighbouring particles that live outside the rank.
2) Use VecCreateGhost to create a new vector with extra local space for the 
ghost components of the vector.
3) Use VecScatterCreate, VecScatterBegin, and VecScatterEnd to transfer data 
between a vector obtained with DMSwarmCreateGlobalVectorFromField and the 
ghosted vector.
4) Do the necessary computations using the vectors created with VecCreateGhost 
(a rough sketch follows below).
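
To make 2)-4) concrete, here is a rough, untested sketch for one scalar field. 
The field name "my_scalar", the helper name, and the ghost index array are 
placeholders (the indices would come from the IS in step 1), and I use 
VecGhostUpdateBegin/End instead of an explicit VecScatterCreate, since the 
ghosted vector already carries the needed scatter:

#include <petscdmswarm.h>

/* Build a ghosted vector for one scalar swarm field; 'nghost' and 'ghost_gidx'
   hold the global indices of the off-rank neighbour particles */
static PetscErrorCode GhostedFieldVector(DM swarm, PetscInt nghost, const PetscInt ghost_gidx[], Vec *gv)
{
  Vec      swarmvec;
  PetscInt n, N;

  PetscFunctionBeginUser;
  PetscCall(DMSwarmCreateGlobalVectorFromField(swarm, "my_scalar", &swarmvec));
  PetscCall(VecGetLocalSize(swarmvec, &n));
  PetscCall(VecGetSize(swarmvec, &N));

  /* Vector with extra local storage for the ghost components */
  PetscCall(VecCreateGhost(PetscObjectComm((PetscObject)swarm), n, N, nghost, ghost_gidx, gv));

  /* Copy the owned values, then pull the ghost values from their owning ranks */
  PetscCall(VecCopy(swarmvec, *gv));
  PetscCall(VecGhostUpdateBegin(*gv, INSERT_VALUES, SCATTER_FORWARD));
  PetscCall(VecGhostUpdateEnd(*gv, INSERT_VALUES, SCATTER_FORWARD));

  PetscCall(DMSwarmDestroyGlobalVectorFromField(swarm, "my_scalar", &swarmvec));
  PetscFunctionReturn(PETSC_SUCCESS);
}

VecGhostGetLocalForm() on the result then gives a local vector holding the 
owned entries followed by the ghost entries, which is what I would use for the 
local computations in 4).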

Thanks,
Miguel

On Aug 2, 2024, at 8:58 AM, Matthew Knepley <knep...@gmail.com> wrote:

On Thu, Aug 1, 2024 at 4:40 PM MIGUEL MOLINOS PEREZ <mmoli...@us.es> wrote:

Dear all,

I am implementing a Molecular Dynamics (MD) code using the DMSWARM interface. 
In MD simulations we evaluate, on each particle (atom), some kind of scalar 
functional using data from the neighbouring atoms. My problem lies in the 
parallel implementation of the model, because some of these neighbours may lie 
on a different processor.

This is usually solved by using ghost particles. A similar approach (with 
nodes instead) is already implemented for other PETSc mesh structures, e.g. 
DMPlexConstructGhostCells. Unfortunately, I don't see this kind of construct 
for DMSWARM. Am I missing something?

I think this could be done by adding a buffer region, exploiting the 
background DMDA mesh that I already use for domain decomposition, then using 
the buffer region of each cell to locate the ghost particles, and finally 
using VecCreateGhost. Is this feasible? Or is there an easier approach using 
other PETSc functions?
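
To make the question concrete, the setup I have in mind looks roughly like 
this (grid sizes, dimension, and the helper name are placeholders; the DMDA 
stencil width plays the role of the buffer region):

#include <petscdmda.h>
#include <petscdmswarm.h>

/* Background DMDA whose stencil width 'buffer' acts as the buffer region,
   attached to the swarm as its cell DM */
static PetscErrorCode SetupSwarmWithBufferDA(PetscInt buffer, DM *swarm)
{
  DM da;

  PetscFunctionBeginUser;
  PetscCall(DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                         DMDA_STENCIL_BOX, 16, 16, 16, PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
                         1, buffer, NULL, NULL, NULL, &da));
  PetscCall(DMSetUp(da));

  PetscCall(DMCreate(PETSC_COMM_WORLD, swarm));
  PetscCall(DMSetType(*swarm, DMSWARM));
  PetscCall(DMSetDimension(*swarm, 3));
  PetscCall(DMSwarmSetType(*swarm, DMSWARM_PIC));
  PetscCall(DMSwarmSetCellDM(*swarm, da));
  PetscFunctionReturn(PETSC_SUCCESS);
}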

This is feasible, but it would be good to develop a set of best practices, 
since we have been mainly focused on the case of non-redundant particles. Here 
is how I think I would do what you want.

1) Add a particle field 'ghost' that identifies ghost vs owned particles. I 
think it needs options OWNED, OVERLAP, and GHOST

2) At some interval, identify particles that should be sent to other processes 
as ghosts. I would call these "overlap particles". The determination seems 
application-specific, so I would leave it to the user right now. We do two 
things to these particles:

    a) Mark chosen particles as OVERLAP

    b) Change the rank field to the process we are sending to

3) Call DMSwarmMigrate with PETSC_FALSE for the particle deletion flag

4) Mark OVERLAP particles as GHOST when they arrive
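
In code, steps 2)-4) might look roughly like the following untested sketch. 
WhichRankNeedsParticle() is a placeholder for the application-specific 
neighbour test, the PSTATUS_* names are placeholders for the OWNED/OVERLAP/GHOST 
values of the 'ghost' field from step 1, and it assumes arriving particles are 
appended after the pre-existing local ones:

#include <petscdmswarm.h>

/* Placeholder names for the values stored in the 'ghost' field */
typedef enum {PSTATUS_OWNED = 0, PSTATUS_OVERLAP = 1, PSTATUS_GHOST = 2} ParticleStatus;

/* Placeholder for the application-specific test: returns the rank that needs a
   ghost copy of particle p, or -1 if no copy is needed */
extern PetscInt WhichRankNeedsParticle(DM swarm, PetscInt p);

static PetscErrorCode ExchangeGhostParticles(DM swarm)
{
  PetscInt    p, nbefore, nafter, *status, *rank;
  PetscMPIInt myrank;

  PetscFunctionBeginUser;
  PetscCallMPI(MPI_Comm_rank(PetscObjectComm((PetscObject)swarm), &myrank));
  PetscCall(DMSwarmGetLocalSize(swarm, &nbefore));

  /* 2a/2b) Mark chosen particles as OVERLAP and point their rank field at the receiver */
  PetscCall(DMSwarmGetField(swarm, "ghost", NULL, NULL, (void **)&status));
  PetscCall(DMSwarmGetField(swarm, DMSwarmField_rank, NULL, NULL, (void **)&rank));
  for (p = 0; p < nbefore; ++p) {
    PetscInt target = WhichRankNeedsParticle(swarm, p);
    if (target >= 0 && target != (PetscInt)myrank) {
      status[p] = PSTATUS_OVERLAP;
      rank[p]   = target;
    }
  }
  PetscCall(DMSwarmRestoreField(swarm, DMSwarmField_rank, NULL, NULL, (void **)&rank));
  PetscCall(DMSwarmRestoreField(swarm, "ghost", NULL, NULL, (void **)&status));

  /* 3) Send copies without deleting the originals on this rank */
  PetscCall(DMSwarmMigrate(swarm, PETSC_FALSE));

  /* 4) Relabel the arrivals as GHOST (assumes they are appended after the
        pre-existing local particles; the retained originals stay OVERLAP) */
  PetscCall(DMSwarmGetLocalSize(swarm, &nafter));
  PetscCall(DMSwarmGetField(swarm, "ghost", NULL, NULL, (void **)&status));
  for (p = nbefore; p < nafter; ++p) status[p] = PSTATUS_GHOST;
  PetscCall(DMSwarmRestoreField(swarm, "ghost", NULL, NULL, (void **)&status));
  PetscFunctionReturn(PETSC_SUCCESS);
}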

There is one problem in the above algorithm. It does not allow sending 
particles to multiple ranks. We would have to do this
in phases right now, or make a small adjustment to the interface allowing 
replication of particles when a set of ranks is specified.

  Thanks,

     Matt


Thank you,
Miguel




--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/
