https://gcc.gnu.org/bugzilla/show_bug.cgi?id=118070

--- Comment #6 from Eric Botcazou <ebotcazou at gcc dot gnu.org> ---
> I don't see how it can work with all the _BitInts though; the _BitInt ABI
> depends on various things: which precisions are handled as normal scalar
> integral types or their bitfields, which are handled as arrays of limbs,
> what the limb size is, limb ordering, bit ordering within the limbs, etc.
> So, to support _BitInt with SSO, one would basically need to define a new
> ABI which would be a big-endian (or little-endian) counterpart of the
> normal _BitInt ABI.

SSO only applies to aggregate types though, not to _BitInts directly, so I
don't think that the _BitInt ABI really comes into play here.
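
For reference, scalar_storage_order is documented to attach only to struct
and union types, so there is no way to request SSO for a standalone _BitInt
object in the first place.  A minimal illustration (the names are mine, and
whether/how a _BitInt member is laid out under SSO is exactly the open
question here):

/* The attribute byte-reverses the scalar members of the aggregate;
   a standalone object always keeps the native representation.  */
struct __attribute__((scalar_storage_order ("big-endian"))) be_s
{
  unsigned int u;    /* stored big-endian even on a little-endian target */
  _BitInt(128) b;    /* its layout under SSO is what would need defining */
};

_BitInt(128) native; /* native limb/byte order; no way to attach SSO here */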

> Plus all the libgcc _BitInt helper routines would need to have their SSO
> variants, otherwise one could handle (with a lot of pain in bitint lowering)
> struct __attribute__((scalar_storage_order ("big-endian"))) S
> {
>   _BitInt(1023) a;
> } b;
> _BitInt(1023) c;
> 
> _BitInt(1023)
> foo (void)
> {
>   return b.a + c;
> }
> but not really
>   return b.a * c;
> (the addition is done without libgcc help, so one could do the b.a endian
> adjustment, but multiplication, floating <-> _BitInt casts, etc. are all
> done with libgcc help).

So the libgcc helper routines work directly on the in-memory representation of
_BitInts and not on their values (abstracted in some way)?  If so, yes,
supporting SSO would require reverse-order variants of them (like the Ada
runtime has for reverse-order packed array types).
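
If they do, the alternative to duplicating every helper would be to convert
at the call boundary: bring the SSO operand into native form, call the
existing routine, and convert the result back.  A minimal sketch of that
conversion, assuming a 64-bit limb and assuming an SSO _BitInt would simply
be the native representation with the limb order reversed and each limb
byte-swapped (neither of which is actually defined anywhere yet):

#include <stdint.h>
#include <stddef.h>

typedef uint64_t limb_t;  /* assumed limb type, not the real libgcc one */

/* Convert between native and reversed representations in place:
   reverse the limb order and byte-swap each limb.  Self-inverse.  */
static void
bitint_reverse (limb_t *p, size_t nlimbs)
{
  size_t i = 0, j = nlimbs;
  while (i + 1 < j)
    {
      --j;
      limb_t t = __builtin_bswap64 (p[i]);
      p[i] = __builtin_bswap64 (p[j]);
      p[j] = t;
      ++i;
    }
  if (i < j)                    /* odd count: byte-swap the middle limb */
    p[i] = __builtin_bswap64 (p[i]);
}

The lowering could then reverse b.a into a scratch buffer, call the existing
native multiply helper, and reverse the result, instead of growing a second
set of SSO entry points; whether that is preferable to dedicated variants is
a separate question.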
