https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102989
Jakub Jelinek <jakub at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |rguenth at gcc dot gnu.org,
                   |                            |rsandifo at gcc dot gnu.org

--- Comment #23 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
Seems LLVM currently only supports _BitInt up to 128, which is kind of
useless for users; those sizes can be easily handled as bit-fields,
performing normal arithmetic on them (see the bit-field example below).

As for the implementation, I'd like to brainstorm about it a little bit.

I'd say we want a new tree code for it, say BITINT_TYPE.  TYPE_PRECISION
unfortunately is only a 10-bit field, which is not enough, so the full
precision would need to be specified somewhere else.  And we'd have targetm
specify the ABI details (a possible shape for such a hook is sketched
below): the size of a limb (which would need to be exposed to libgcc with
-fbuilding-libgcc, unless it is the same everywhere), whether the limbs are
ordered least significant to most significant or vice versa, and whether the
highest limb is sign/zero extended or unspecified beyond the precision.

We'll need to handle the wide constants somehow, but there is a problem with
wide ints: widest_int is not wide enough to handle arbitrarily long
constants.

Shall the type be a GIMPLE reg type?  I assume for _BitInt <= 128 (or <= 64
when TImode isn't supported) we just want to keep the new type on the
function parameter/return value boundaries and use INTEGER_TYPEs from, say,
gimplification onward.  What about the larger ones?  For arbitrary-size
generic vectors, say, we keep them in SSA form until late (generic vector
lowering) and lower them at that point; perhaps we could do the same for
_BitInt?

Unary as well as most binary operations can be handled by simple loops over
extraction of limbs from the large number (see the limb-loop sketch below);
then there are multiplication and division/modulo.  I think the latter is
why LLVM restricts it to 128 bits right now.
https://gcc.gnu.org/pipermail/gcc/2022-May/thread.html#238657 was a proposal
from the LLVM side, but I don't see it actually being developed further and
don't see it on LLVM trunk.

I wonder whether for these libgcc APIs (and is just __divmod/__udivmod
enough, or do we also want multiplication, or for -Os purposes other APIs
too?) it wouldn't be better to have more GMP/mpn-like APIs where we specify
the number of bits rather than the number of limbs as in the above thread,
and perhaps specify it not just for one argument but for several, so that
during the lowering we can match sign/zero extensions of the arguments and
handle say _BitInt(2048) / _BitInt(16) efficiently (a possible prototype is
sketched below).  Thoughts on this?
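
A minimal sketch of the bit-field point above, assuming a GCC-style long
long bit-field (the exact widths and wrapping semantics of a real _BitInt
would differ in details):

  struct S { long long v : 37; };   /* acts much like a signed 37-bit int */

  long long
  f (struct S x, struct S y)
  {
    x.v = x.v + y.v;   /* ordinary arithmetic; truncated to 37 bits on store */
    return x.v;        /* sign-extended back to long long on read */
  }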
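
A possible shape for the targetm hook mentioned above; all of these names
are invented here purely for illustration, nothing like this exists yet:

  /* Hypothetical per-target description of the _BitInt ABI.  */
  struct bitint_abi_info
  {
    machine_mode limb_mode;  /* mode of one limb, e.g. DImode */
    bool big_endian_limbs;   /* most significant limb stored first?  */
    bool extended;           /* are bits above the precision sign/zero
                                extended, or unspecified?  */
  };

  /* bool targetm.c.bitint_type_info (int precision,
                                      struct bitint_abi_info *info);  */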
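
A sketch of the kind of limb loop meant above, assuming 64-bit little-endian
limbs; addition only, with the carry propagated from the least to the most
significant limb (not actual GCC lowering output):

  typedef unsigned long long limb_t;

  static void
  bitint_add (limb_t *r, const limb_t *a, const limb_t *b, unsigned nlimbs)
  {
    limb_t carry = 0;
    for (unsigned i = 0; i < nlimbs; i++)
      {
        limb_t t = a[i] + carry;
        carry = t < carry;       /* carry out of a[i] + carry */
        r[i] = t + b[i];
        carry += r[i] < b[i];    /* carry out of t + b[i] */
      }
  }

Subtraction, bitwise operations and comparisons follow the same single-pass
pattern; shifts need a limb offset plus a sub-limb bit offset.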
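
And a possible prototype for the bit-precision-based libgcc API discussed at
the end; the name and argument layout are made up here only to illustrate
the idea (a signed variant would additionally need to convey signedness,
e.g. via negative precisions or separate entry points):

  typedef unsigned long long limb_t;

  /* Hypothetical: every operand carries its own precision in bits, so
     _BitInt(2048) / _BitInt(16) can pass dbits == 16 instead of widening
     the divisor to 2048 bits, and the callee can look through the
     zero-extended high limbs of the other operands.  */
  extern void
  __bitint_udivmod (limb_t *quot, unsigned long qbits,
                    limb_t *rem, unsigned long rbits,
                    const limb_t *num, unsigned long nbits,
                    const limb_t *den, unsigned long dbits);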