https://gcc.gnu.org/bugzilla/show_bug.cgi?id=122922

Nathaniel Shead <nshead at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
            Summary|[16 Regression] Valid code  |Valid code rejected because
                   |rejected because of         |of interaction of friend
                   |interaction of friend       |function and global module
                   |function and global module  |fragment
                   |fragment                    |
      Known to fail|                            |14.3.0, 15.2.0
      Known to work|14.3.0, 15.2.0              |

--- Comment #5 from Nathaniel Shead <nshead at gcc dot gnu.org> ---
Reduced further again.  This reduced version also fails with 14.3 and 15.2, so
I'm going to say this is not actually a regression.

  // a.cpp
  export module M;
  export extern "C++" template <typename> struct allocator {
    friend void operator==(allocator, allocator) {}
  };


  // b.cpp
  export module VF:Tensor;
  import M;

  template<int r> struct TTensor {
    allocator<int> a;
    TTensor<r - 1> TensorArr[1];
  };
  template<> struct TTensor<0> {};


  // c.cpp
  export module VF;
  export import :Tensor;

  void TCara(allocator<TTensor<1>>) {}
  export template<int r> void TCampo() {
    TCara(allocator<TTensor<1>>{});
    TTensor<r>();
  }


  // d.cpp
  import VF;
  int main() {
    TCampo<1>();
  }


I think it may be possible to reduce this even further.

The root cause appears to be the array type: the friend declaration brings
TTensor<0> and TTensor<1> into the same cluster, and we have no intra-cluster
ordering that ensures TTensor<0>'s definition is streamed before TTensor<1>'s.

In this particular case we happen to stream TTensor<1>'s definition first,
which builds an array type of TTensor<0>.  At this point TTensor<0>'s
definition hasn't been streamed, so the array is considered an array of
incomplete type.  Later we do stream TTensor<0>'s definition, but we don't
update the TYPE_SIZE etc. of the array type we built earlier, so
build_value_init thinks we still have an incomplete type and errors out.

Some possible approaches:

- Add a dependency ordering between structs where one has a field that's a
non-dependent array type of a different struct in the same cluster, so that
the element's struct is always streamed first.  We shouldn't see cycles,
because two structs cannot each contain an array of the other.

- Have some post-processing for arrays of incomplete type during module
streaming; once we've finished reading the cluster we can loop through those
array types and attempt to complete them.

- Add more calls to 'complete_type' when processing structure fields, rather
than assuming that if we have a complete record type all its fields must also
have been completed already.  This seems error-prone though, as we may miss
cases.  (Unless we replace 'COMPLETE_TYPE_P' entirely in the C++ frontend with
a function that attempts to complete the type and returns false if it failed?)

I'm not immediately sure what the best approach of these might be, or if
there's another option.

As an aside, it would also be nice to find a way to make friend functions
less likely to bring a huge number of disparate types into the same cluster,
as larger clusters make lazy loading a lot less efficient.
