https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114270

Jakub Jelinek <jakub at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |jakub at gcc dot gnu.org

--- Comment #3 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
(In reply to Andrew Pinski from comment #1)
> The rules for when this can be done are a bit more complex than what is
> described here.
> 
> 1) Significand precision of the floating point type needs to be >= precision
> of the integer type

I'd also verify that the minimum/maximum of the integer type are exactly
representable in the floating-point type, so that even limitations on the
exponent don't get in the way.
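
Below is a minimal standalone sketch (mine, not code from GCC or from any
patch) that illustrates both checks with <float.h>/<limits.h> constants:
condition 1 from comment #1 (significand precision of the FP type >=
precision of the integer type) and the extra check above (the integer type's
minimum/maximum must be exactly representable, so exponent limits can't get
in the way).  It assumes int has no padding bits.

#include <float.h>
#include <limits.h>
#include <stdio.h>

int
main (void)
{
  /* Value bits of int, assuming no padding bits.  */
  int int_bits = CHAR_BIT * (int) sizeof (int) - 1;

  /* Condition 1: significand precision >= integer precision.  */
  printf ("float : %d significand bits vs %d int value bits -> %s\n",
          FLT_MANT_DIG, int_bits,
          FLT_MANT_DIG >= int_bits ? "ok" : "too narrow");
  printf ("double: %d significand bits vs %d int value bits -> %s\n",
          DBL_MANT_DIG, int_bits,
          DBL_MANT_DIG >= int_bits ? "ok" : "too narrow");

  /* Condition 2: the extremes of the integer type must be exactly
     representable.  Compare in a wider FP type so the comparison itself
     doesn't hide the rounding; with IEEE binary32, (float) INT_MAX rounds
     to 2^31, while INT_MIN == -2^31 is exact, which is why both ends need
     checking.  */
  printf ("(float) INT_MAX exact: %s\n",
          (double) (float) INT_MAX == (double) INT_MAX ? "yes" : "no");
  printf ("(float) INT_MIN exact: %s\n",
          (double) (float) INT_MIN == (double) INT_MIN ? "yes" : "no");
  printf ("(double) INT_MAX exact: %s\n",
          (long double) (double) INT_MAX == (long double) INT_MAX
          ? "yes" : "no");
  return 0;
}

On a typical ILP32/LP64 target with IEEE binary32/binary64 this prints
"too narrow" and "no" for float vs. int, and "ok"/"yes" for double vs. int.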
