how well does gcc support type-specific pointer formats?
Does gcc allow backends to have a say in how pointers are represented
(bits beyond the address), what happens in conversions between pointer
types, and what happens in conversions between pointers and uintptr_t?

The target in question has:
- one pointer format and set of load/store instructions for pointers to
  int/long
- another format and set of load/store instructions for pointers to char
- pointers to short use a third format in general, but can use the
  int/long format IF you know which half of the word you're going to
  access

What mechanisms, if any, are present in gcc to deal with this?
Re: how well does gcc support type-specific pointer formats?
On Wed, Sep 30, 2015 at 11:23 AM, Mikael Pettersson wrote:
> Does gcc allow backends to have a say in how pointers are represented
> (bits beyond the address), what happens in conversions between pointer
> types, and what happens in conversions between pointers and uintptr_t?
>
> The target in question has:
> - one pointer format and set of load/store instructions for pointers to
>   int/long
> - another format and set of load/store instructions for pointers to char
> - pointers to short use a third format in general, but can use the
>   int/long format IF you know which half of the word you're going to
>   access
>
> What mechanisms, if any, are present in gcc to deal with this?

Basically none.  The only thing I could imagine you could use is to have
the pointer formats be different address spaces.  The target controls how
to convert pointers from/to different address spaces.  I am not aware of
special handling for pointer-to-int or int-to-pointer conversions though -
IIRC they simply use bit-identical conversions (thus subregs if the modes
differ).

But who would design this kind of weird architecture and think he could
get away with that easily?

Richard.
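[To make the address-space suggestion concrete, here is a minimal sketch
of the analogous machinery on an existing port: avr-gcc's __flash and
__memx spaces already use different pointer formats (16-bit program-memory
pointers vs. 24-bit pointers carrying a flash/RAM selector bit), and
cross-space conversions go through the target's address-space hooks
(TARGET_ADDR_SPACE_CONVERT, TARGET_ADDR_SPACE_POINTER_MODE, etc.).  A port
for the mainframe target could define its own spaces along the same lines.
This compiles with avr-gcc only.]

    /* as-sketch.c: compiles with avr-gcc.  __flash data lives in program
       memory and is read through 16-bit pointers via LPM; __memx pointers
       are 24 bits wide and carry a bit saying whether they point into
       flash or RAM, so each load dispatches on the pointer format.  */
    const __flash char table[] = "in flash";

    char read_anywhere (const __memx char *p)
    {
      /* The load sequence emitted here depends on the extra format bits
         in the pointer -- the same kind of per-format load/store
         selection the mainframe target needs per pointee type.  */
      return *p;
    }

    char first (void)
    {
      /* Converting a __flash pointer to the wider __memx format goes
         through the target's TARGET_ADDR_SPACE_CONVERT hook.  */
      return read_anywhere ((const __memx char *) table);
    }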
Re: how well does gcc support type-specific pointer formats?
Richard Biener writes:
> On Wed, Sep 30, 2015 at 11:23 AM, Mikael Pettersson wrote:
> > Does gcc allow backends to have a say in how pointers are represented
> > (bits beyond the address), what happens in conversions between pointer
> > types, and what happens in conversions between pointers and uintptr_t?
> >
> > The target in question has:
> > - one pointer format and set of load/store instructions for pointers
> >   to int/long
> > - another format and set of load/store instructions for pointers to
> >   char
> > - pointers to short use a third format in general, but can use the
> >   int/long format IF you know which half of the word you're going to
> >   access
> >
> > What mechanisms, if any, are present in gcc to deal with this?
>
> Basically none.  The only thing I could imagine you could use is to have
> the pointer formats be different address spaces.  The target controls how
> to convert pointers from/to different address spaces.  I am not aware of
> special handling for pointer-to-int or int-to-pointer conversions though -
> IIRC they simply use bit-identical conversions (thus subregs if the modes
> differ).
>
> But who would design this kind of weird architecture and think he could
> get away with that easily?
>
> Richard.

It's an old mainframe architecture, not a new design.  A company produced
clones up until a few years ago.  They also maintained a private gcc port,
based initially on gcc-3.2 and eventually on gcc-4.3, but were unable to
rebase on gcc-4.4.  I have access to that port, and am trying to figure
out if it can be reimplemented in some sane way.

/Mikael
Debugger support for __float128 type?
Hello,

I've been looking into supporting __float128 in the debugger, since we're
now introducing this type on PowerPC.  Initially, I simply wanted to do
whatever GDB does on Intel, but it turns out debugging __float128 doesn't
work on Intel either ...

The most obvious question is, how should the type be represented in
DWARF debug info in the first place?  Currently, GCC generates on i386:

        .uleb128 0x3    # (DIE (0x2d) DW_TAG_base_type)
        .byte 0xc       # DW_AT_byte_size
        .byte 0x4       # DW_AT_encoding
        .long .LASF0    # DW_AT_name: "long double"

and

        .uleb128 0x3    # (DIE (0x4c) DW_TAG_base_type)
        .byte 0x10      # DW_AT_byte_size
        .byte 0x4       # DW_AT_encoding
        .long .LASF1    # DW_AT_name: "__float128"

On x86_64, __float128 is encoded the same way, but long double is:

        .uleb128 0x3    # (DIE (0x31) DW_TAG_base_type)
        .byte 0x10      # DW_AT_byte_size
        .byte 0x4       # DW_AT_encoding
        .long .LASF0    # DW_AT_name: "long double"

Now, GDB doesn't recognize __float128 on either platform, but on i386
it could at least in theory distinguish the two via DW_AT_byte_size.

But on x86_64 (and also on powerpc), long double and __float128 have
identical DWARF encodings, except for the name.

Looking at the current DWARF standard, it's not really clear how to
make a distinction, either.  The standard has no way to specify any
particular floating-point format; the only attributes for a base type
of DW_ATE_float encoding are related to the size.

(For the Intel case, one option might be to represent the fact that
long double has only 80 data bits and the rest is padding, via some
combination of the DW_AT_bit_size and DW_AT_bit_offset or
DW_AT_data_bit_offset attributes.  But that wouldn't help for PowerPC,
since both long double and __float128 really use 128 data bits, just
with different encodings.)

Some options might be:

- Extend the official DWARF standard in some way

- Use a private extension (e.g. from the platform-reserved
  DW_AT_encoding value range)

- Have the debugger just hard-code a special case based
  on the __float128 name

Am I missing something here?  Any suggestions welcome ...

B.t.w. is there interest in fixing this problem for Intel?  I notice
there is a GDB bug open on the issue, but nothing seems to have happened
so far: https://sourceware.org/bugzilla/show_bug.cgi?id=14857

Bye,
Ulrich

--
  Dr. Ulrich Weigand
  GNU/Linux compilers and toolchain
  ulrich.weig...@de.ibm.com
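[For reference, the DIEs quoted above can be regenerated from a two-line
test case; this assumes an x86 GCC with __float128 support (the Q literal
suffix is a GNU extension):]

    /* float128-dwarf.c: regenerate the base-type DIEs quoted above.
       Compile with:  gcc -g -S float128-dwarf.c
       and search the .s output for DW_TAG_base_type entries.  */
    long double ld   = 1.0L;  /* 80-bit extended on x86 */
    __float128  f128 = 1.0Q;  /* IEEE binary128 */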
Re: Debugger support for __float128 type?
On Wed, 30 Sep 2015, Ulrich Weigand wrote:

> - Extend the official DWARF standard in some way

I think you should do this.

Note that TS 18661-4 will be coming out very soon, and includes (optional)
types

* _FloatN, where N is 16, 32, 64 or >= 128 and a multiple of 32;
* _DecimalN, where N >= 32 and a multiple of 32;
* _Float32x, _Float64x, _Float128x, _Decimal64x, _Decimal128x

so this is not simply a matter of supporting a GNU extension (not that
it's simply a GNU extension on x86_64 anyway - __float128 is explicitly
mentioned in the x86_64 ABI document), but of supporting an ISO C
extension, in any case where one of the above types is the same size and
radix as float / double / long double but has a different representation.

(All the above are distinct types in C, and distinct from float, double,
long double even if the representations are the same.  But I don't think
DWARF needs to distinguish e.g. float and _Float32 other than by their
name - it's only the case of different representations that needs
distinguishing.  The _Float* and _Float*x types have corresponding
complex types, but nothing further should be needed in DWARF for those
once you can represent _Float*.)

--
Joseph S. Myers
jos...@codesourcery.com
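[The distinct-type point is easy to demonstrate with _Generic.  A minimal
sketch, assuming a GCC recent enough to implement the TS 18661-4 types
(GCC 7 or later provides _Float32/_Float128 and the F32/F128 literal
suffixes on x86_64):]

    #include <stdio.h>

    /* Each type selects its own _Generic branch: float and _Float32 are
       distinct C types even when both are IEEE binary32, and on x86_64
       long double (80-bit extended) and _Float128 (IEEE binary128)
       additionally differ in representation despite both occupying
       16 bytes.  */
    #define TYPE_NAME(x) _Generic((x),      \
        float:       "float",               \
        _Float32:    "_Float32",            \
        long double: "long double",         \
        _Float128:   "_Float128",           \
        default:     "something else")

    int main (void)
    {
      printf ("%s\n", TYPE_NAME (1.0f));    /* float       */
      printf ("%s\n", TYPE_NAME (1.0F32));  /* _Float32    */
      printf ("%s\n", TYPE_NAME (1.0L));    /* long double */
      printf ("%s\n", TYPE_NAME (1.0F128)); /* _Float128   */
      return 0;
    }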
gcc-4.9-20150930 is now available
Snapshot gcc-4.9-20150930 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.9-20150930/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.9 SVN branch
with the following options:
  svn://gcc.gnu.org/svn/gcc/branches/gcc-4_9-branch revision 228309

You'll find:

  gcc-4.9-20150930.tar.bz2   Complete GCC
                             MD5=4db629791e4514e08d89a89cc896f5ce
                             SHA1=d4601bbb1cf799f8b40554b174ecff36a1d22634

Diffs from 4.9-20150923 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.9
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.
Re: Debugger support for __float128 type?
> Date: Wed, 30 Sep 2015 19:33:44 +0200 (CEST)
> From: "Ulrich Weigand"
>
> Hello,
>
> I've been looking into supporting __float128 in the debugger, since we're
> now introducing this type on PowerPC.  Initially, I simply wanted to do
> whatever GDB does on Intel, but it turns out debugging __float128 doesn't
> work on Intel either ...
>
> The most obvious question is, how should the type be represented in
> DWARF debug info in the first place?  Currently, GCC generates on i386:
>
>         .uleb128 0x3    # (DIE (0x2d) DW_TAG_base_type)
>         .byte 0xc       # DW_AT_byte_size
>         .byte 0x4       # DW_AT_encoding
>         .long .LASF0    # DW_AT_name: "long double"
>
> and
>
>         .uleb128 0x3    # (DIE (0x4c) DW_TAG_base_type)
>         .byte 0x10      # DW_AT_byte_size
>         .byte 0x4       # DW_AT_encoding
>         .long .LASF1    # DW_AT_name: "__float128"
>
> On x86_64, __float128 is encoded the same way, but long double is:
>
>         .uleb128 0x3    # (DIE (0x31) DW_TAG_base_type)
>         .byte 0x10      # DW_AT_byte_size
>         .byte 0x4       # DW_AT_encoding
>         .long .LASF0    # DW_AT_name: "long double"
>
> Now, GDB doesn't recognize __float128 on either platform, but on i386
> it could at least in theory distinguish the two via DW_AT_byte_size.
>
> But on x86_64 (and also on powerpc), long double and __float128 have
> identical DWARF encodings, except for the name.
>
> Looking at the current DWARF standard, it's not really clear how to
> make a distinction, either.  The standard has no way to specify any
> particular floating-point format; the only attributes for a base type
> of DW_ATE_float encoding are related to the size.
>
> Some options might be:
>
> - Extend the official DWARF standard in some way
>
> - Use a private extension (e.g. from the platform-reserved
>   DW_AT_encoding value range)
>
> - Have the debugger just hard-code a special case based
>   on the __float128 name
>
> Am I missing something here?  Any suggestions welcome ...
>
> B.t.w. is there interest in fixing this problem for Intel?  I notice
> there is a GDB bug open on the issue, but nothing seems to have happened
> so far: https://sourceware.org/bugzilla/show_bug.cgi?id=14857

Perhaps you should start with explaining what __float128 actually is on
your specific platform?  And what long double actually is.  I'm guessing
long double is what we sometimes call an IBM long double, which is
essentially two IEEE double-precision floating point numbers packed
together, and that __float128 is an attempt to fix history and have a
proper IEEE quad-precision floating point type ;).  And that __float128
isn't actually implemented in hardware.

I fear that the idea that it is possible to determine the floating point
type purely from the size is fairly deeply ingrained into the GDB code
base.  Fixing this won't be easy.

The easiest thing to do would probably be to define a separate ABI where
long double is IEEE quad-precision.  But the horse is probably already
out of the barn on that one...

Making the decision based on the name is probably the easiest thing to
do.  But keep in mind that other OSes that currently don't support IBM
long doubles, and where long double is the same as double, may want to
define long double to be IEEE quad-precision floating point on powerpc.
The reason people haven't bothered to fix this is probably that nobody
actually implements quad-precision floating point in hardware.  And
software implementations are so slow that people don't really use them
unless they need to.  Like I did, to numerically calculate some
asymptotic expansions for my thesis work...
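[As a footnote to the IBM long double description above, the double-double
layout is easy to observe directly.  A minimal sketch, assuming a
powerpc64 toolchain where long double is IBM double-double; on targets
with 80-bit or IEEE-binary128 long double the decomposition below is
meaningless:]

    #include <stdio.h>
    #include <string.h>

    /* Decompose an IBM long double into its two component IEEE doubles.
       ASSUMES the classic powerpc64 ABI where a long double is a pair of
       doubles whose sum is the represented value.  */
    int main (void)
    {
      /* A value needing more than 53 bits of precision, so the low
         component is nonzero.  */
      long double ld = 1.0L + 0x1p-100L;

      double parts[2];
      memcpy (parts, &ld, sizeof parts);

      printf ("high = %a\n", parts[0]);  /* ~1.0    */
      printf ("low  = %a\n", parts[1]);  /* ~2^-100 */
      return 0;
    }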