On 27/07/2025 at 17:35, Andre Vehreschild wrote:


On Sun, 27 Jul 2025 16:57:14 +0200
Mikael Morin <morin-mik...@orange.fr> wrote:

On 27/07/2025 at 12:57, Andre Vehreschild wrote:
Hi Mikael,
In this example it is image 1, i.e., for OpenCoarrays a thread on
image 1, that takes the data from the executing image and writes it
into the memory of image 1.
When you say it takes the data, do you mean it takes the assignment
right-hand side (named "data"), or do you mean that it takes all the
required data (the right-hand side "data" and the index value initialized
with the result of "get_val()") from the executing image?

Both! Always keep in mind that an expression like

res(this_image())[1] = 42

executed on image 2 manipulates the memory of process/image 1. When those
images are not running on the same machine (which is possible with MPI), the
(evaluated) index, here this_image(), and the evaluated rhs need to be
sent to image 1, as in this example. On image 1 a routine is called
that looks like this (pseudo C, abbreviated):

void caf_accessor_res_1 (struct array_integer_t *res, void *rhs,
                         struct add_data_t *add_data)
{
    /* rhs and the evaluated index arrive from the executing image.  */
    int *int_rhs = (int *) rhs;
    res->data[add_data->this_image_val] = *int_rhs;
}

The above routine is generated by the Fortran compiler from a gfc_code
structure that models it in Fortran. I went that way to get exactly the
assignment behavior of Fortran. This way, assigning res(1:N)[...] = rhs(1:N)
does not trigger N communications for assigning scalars; instead the vector
is sent as a block and the loop that modifies the data in res is done in the
accessor (significantly faster).

This routine is executed on the remote image, here image 1. Note that it
no longer contains the coindex, because this routine is the implementation
of the coindexing. For brevity I left out all the boilerplate that is
implemented in OpenCoarrays.
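
To make the block-transfer case concrete, here is a minimal sketch of what
such an array accessor could look like (hypothetical struct layout and
names, not the code OpenCoarrays actually generates):

struct array_integer_t { int *data; };   /* hypothetical descriptor */
struct add_data_t { int lb, ub; };       /* evaluated section bounds */

/* Sketch of an accessor for res(lb:ub)[...] = rhs(1:N): the whole rhs
   vector arrives in one transfer, and the element loop runs here on
   the target image.  */
void caf_accessor_res_2 (struct array_integer_t *res, void *rhs,
                         struct add_data_t *add_data)
{
    int *int_rhs = (int *) rhs;
    for (int i = add_data->lb; i <= add_data->ub; i++)
        res->data[i - 1] = int_rhs[i - add_data->lb];  /* Fortran is 1-based */
}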

If I rephrase the above using my example:
https://gcc.gnu.org/pipermail/fortran/2025-July/062591.html
with the assignment:
    res(get_val())[1] = data

every image <n> does:
    rhs_n = data;
    idx_n = get_val();

and the image <1> does:
    for each n:
       res(idx_n) = rhs_n;

Do you confirm?

Absolutely confirmed!


And now if I come back to your patch:
https://gcc.gnu.org/pipermail/fortran/2025-July/062530.html
the behaviour before the patch was different:

every image <n> was doing:
    rhs_n = data;

and the image <1> was doing:
    for each n:
      res(get_val()) = rhs_n;

because get_val() is pure and takes zero arguments.
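
To fix ideas, here is how I picture the two accessor shapes in pseudo C
(invented names and layout, not the actual generated code):

struct array_integer_t { int *data; };   /* hypothetical descriptor */
struct add_data_t { int idx; };          /* index evaluated on image <n> */

extern int get_val (void);               /* the pure, zero-argument function */

/* Before the patch: get_val() is re-evaluated inside the accessor,
   i.e. on image <1>.  */
void caf_accessor_res_old (struct array_integer_t *res, void *rhs)
{
    res->data[get_val () - 1] = *(int *) rhs;
}

/* After the patch: image <n> evaluates get_val() itself and sends the
   result together with the rhs.  */
void caf_accessor_res_new (struct array_integer_t *res, void *rhs,
                           struct add_data_t *add_data)
{
    res->data[add_data->idx - 1] = *(int *) rhs;
}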

Still confirming?

And again confirmed. You got it!

Great, I'm starting to emerge from the mist.

Then I think the patch goes in the right direction.
Is it possible to add a testcase for the testsuite?

Should the case EXPR_COMPCALL be handled in a similar way to EXPR_FUNCTION (it can probably wait for a later patch)?

Harald, are you still unconvinced? Do we need to discuss the behavior of the testcase test_teams_1, or something else?

Regarding the comparison between teams/coarrays and MPI concepts, I think it's difficult to map between them, because Fortran doesn't define any communication; it just makes coarrays accessible across images without specifying any further details. With MPI, on the contrary, communication is explicit. So there are probably many MPI concepts that just don't exist with coarrays.
