https://gcc.gnu.org/bugzilla/show_bug.cgi?id=98384
Jakub Jelinek <jakub at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |jakub at gcc dot gnu.org

--- Comment #9 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
I think glibc's %a printing uses the 0x1.xxxx form (for normalized values) at
least for float/double and the IEEE quad long doubles, but uses the 0xf.xxxx
form etc. for the 80-bit long doubles.  My personal preference would be to
always use 0x1.xxxx for normalized numbers and 0x0.xxxx for denormals; I think
that is less surprising to users, and transforming one form into the other is
pretty easy.  And I agree that the tests should just try to parse the returned
string back and check that it is the original value.
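(A minimal, illustrative sketch, not part of the bug report: assuming glibc's
printf/strtold, it shows %a/%La output for double and for long double, where
the 80-bit type may print with a leading digit other than 1, and it implements
the round-trip "parse the returned string back" check mentioned above.  The
round_trips helper is made up for illustration, and the exact strings printed
depend on the platform's libc and long double format.)

#include <assert.h>
#include <float.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: format with %La, parse back with strtold, and check
   that the exact original value is recovered.  %a output is lossless for
   finite values, so equality should hold regardless of whether the leading
   hex digit is 0x1, 0x8, 0xf, ...  */
static int round_trips(long double x)
{
    char buf[64];
    snprintf(buf, sizeof buf, "%La", x);
    return strtold(buf, NULL) == x;
}

int main(void)
{
    /* With glibc on x86, the double 1.0 prints as 0x1p+0, while the 80-bit
       long double 1.0L typically prints with a different leading digit
       (e.g. 0x8p-3), since the full 64-bit significand is used as-is.  */
    printf("double      : %a\n", 1.0);
    printf("long double : %La\n", 1.0L);
    printf("denormal    : %La\n", LDBL_MIN / 1024.0L);

    assert(round_trips(1.0L));
    assert(round_trips(LDBL_MAX));
    assert(round_trips(LDBL_MIN / 1024.0L));  /* a subnormal value */
    return 0;
}

Whatever leading-digit convention the printer uses, a round-trip test of this
shape stays valid, since it only checks that the parsed value equals the
original rather than comparing the digit strings themselves.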