https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100917

--- Comment #6 from sandra at gcc dot gnu.org ---
I've been thinking some more about this issue.  It seems to me that a "proper"
solution is either (1) add a kind field to the GFC descriptor, or (2) do away
with GFC descriptors and use the C descriptor layout and encodings everywhere. 
Both of these break the ABI.  (2) is more work to implement, but it is more
maintainable in the long term and also more efficient at runtime.
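To make (1) concrete, here's a rough sketch assuming the dtype_type layout
currently in libgfortran.h; the name and placement of the new member are
illustrative only, and of course any such change breaks the ABI:

  #include <stddef.h>

  /* Rough sketch of option (1): add a kind field to the dtype.  The
     surrounding layout is assumed from libgfortran.h; the new member's
     name and position are illustrative only.  */
  typedef struct dtype_type
  {
    size_t elem_len;
    int version;
    signed char rank;
    signed char type;
    signed short attribute;
    signed char kind;   /* NEW: Fortran kind number, so that e.g.
                           real(kind=10) and real(kind=16) remain
                           distinguishable when both have elem_len 16.  */
  }
  dtype_type;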

To be more specific about the problem here: on x86 targets, real(kind=10)
in Fortran or long double in C is an 80-bit representation.  On 32-bit
processors it has size 12, but on 64-bit processors it needs to be padded to 16
bytes.  Thus, if all we have is elem_len=16 in the descriptor, we can't tell
whether it is the 80-bit representation or the true 128-bit __float128
representation on targets where that is also available.
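A minimal demonstration of the ambiguity, assuming x86-64 gcc with its
__float128 extension:

  #include <stdio.h>

  int main (void)
  {
    /* On x86-64 gcc, long double is the 80-bit x87 format padded out to
       16 bytes, while __float128 is a true IEEE binary128 type -- yet
       both report the same storage size, so elem_len=16 alone cannot
       distinguish them.  */
    printf ("sizeof (long double) = %zu\n", sizeof (long double));  /* 16 */
    printf ("sizeof (__float128)  = %zu\n", sizeof (__float128));   /* 16 */
    return 0;
  }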

For a workaround that does not change the ABI, the best ideas I've had are
either mapping CFI_type_long_double and CFI_type_float128 to the same value on
targets where the two types both have size 16, or mapping one of them to a
negative value to indicate that it's not interoperable on that target.  In
practice it's probably rare for user code to need the typecode to distinguish
between floating-point formats -- you define your C function to accept the type
of arguments you intend to pass into it.
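To illustrate, here's a sketch of a hypothetical C callee written against the
descriptor interface.  Under the first workaround, CFI_type_float128 (a GNU
extension) would simply compare equal to CFI_type_long_double in the test
below, which is usually harmless because the prototype already pins down what
the caller must pass:

  #include <stdio.h>
  #include <ISO_Fortran_binding.h>

  /* Hypothetical callee; in practice the prototype is written for the
     exact type the Fortran caller passes, so the typecode check is
     belt-and-braces rather than real dispatch.  */
  void
  process (CFI_cdesc_t *d)
  {
    if (d->type == CFI_type_long_double)
      {
        /* Under the proposed workaround, this branch cannot tell the
           80-bit and 128-bit formats apart on targets where both have
           size 16 -- usually harmless, since the declaration already
           fixed the expected type.  */
        long double *p = (long double *) d->base_addr;
        printf ("first element: %Lf\n", *p);
      }
  }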

Anyway, a workaround for the floating-point issue still leaves the
character-size vs. character-length problem unaccounted for.
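A sketch of that problem, assuming the usual len * kind-size encoding of
elem_len for character entities (the helper function here is purely
illustrative):

  #include <stddef.h>
  #include <stdio.h>

  /* Illustration only: elem_len for a character entity is
     len * (bytes per character of that kind), and the descriptor
     records no separate kind field.  */
  static size_t
  elem_len_for (size_t kind_bytes, size_t len)
  {
    return kind_bytes * len;
  }

  int main (void)
  {
    /* character(kind=1,len=16) and character(kind=4,len=4) collide,
       so the callee can recover neither the kind (element size) nor
       the length (element count) from elem_len alone.  */
    printf ("%zu %zu\n",
            elem_len_for (1, 16),   /* 16 */
            elem_len_for (4, 4));   /* 16 */
    return 0;
  }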
