https://gcc.gnu.org/bugzilla/show_bug.cgi?id=104194

Ulrich Weigand <uweigand at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |uweigand at gcc dot gnu.org

--- Comment #8 from Ulrich Weigand <uweigand at gcc dot gnu.org> ---
(In reply to Jakub Jelinek from comment #7)
> A temporary workaround now applied.

It turns out this workaround is not transparent to users of the debugger. For
example, if you define a variable as
   long double x;
and then issue the "ptype x" command in GDB, you'll now get "_Float128" - which
is quite surprising if you've never even used that type in your source code. 
(This also causes a few GDB test suite failures.)
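For concreteness, a minimal sketch of that scenario (the file name, build
command, and target are assumptions; the effect only shows up on targets
covered by the workaround):

   /* repro.c - build with debug info, e.g. "gcc -g repro.c", on an
      affected target, then load the executable into GDB.  */
   long double x = 1.0L;

   int main(void)
   {
       /* In GDB, "ptype x" now prints "type = _Float128" instead of
          "type = long double", even though _Float128 never appears
          anywhere in this source file.  */
       return 0;
   }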

> The dwarf-discuss thread seems to prefer using separate DW_ATE_* values
> instead of DW_AT_precision/DW_AT_minimum_exponent, but hasn't converged yet.

When I discussed this back in 2017:
https://slideslive.com/38902369/precise-target-floatingpoint-emulation-in-gdb
(see page 16 in the slides), my suggestion was a simple
  DW_AT_encoding_variant
which would have let the details of the floating-point format remain
platform-defined (unspecified by DWARF), but simply allow a platform to define
multiple different formats of the same size if required.
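To illustrate the ambiguity that any of these proposals (extra DW_ATE_*
values, DW_AT_precision/DW_AT_minimum_exponent, or an encoding-variant
attribute) would resolve, a sketch along these lines; the target and the
concrete formats below are assumptions:

   /* Sketch: on a target where long double is IBM double-double and
      _Float128 is IEEE binary128, the two base types below both come
      out as DW_ATE_float with DW_AT_byte_size 16.  Apart from
      DW_AT_name, nothing in the DWARF says which 16-byte format each
      one actually uses.  */
   long double a;   /* e.g. IBM extended (double-double), 16 bytes */
   _Float128   b;   /* IEEE binary128, also 16 bytes */

A platform-defined variant attribute on such base-type DIEs would let the
target state which of its same-size formats applies, without DWARF having to
specify the formats themselves.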
