================
@@ -103,6 +103,13 @@ class TypeDescriptor {
     /// representation is that of bitcasting the floating-point value to an
     /// integer type.
     TK_Float = 0x0001,
+    /// An _BitInt(N) type. Lowest bit is 1 for a signed value, 0 for an
+    /// unsigned value. Remaining bits are log_2(bit_width). The value
----------------
earnol wrote:

It's an interesting problem. I have not changed the way _BitInts are encoded in 
memory. My understanding is that it might not be the right approach to have too 
many meanings embedded into one field depending on the context. Please note we 
have only 15 bits here (the lowest bit is the sign). The standard requires us to 
support N up to BITINT_MAXWIDTH, which is quite high.
The standard says (https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2709.pdf):
_The macro BITINT_MAXWIDTH represents the maximum width N supported in the 
declaration of a bit-precise integer type (6.2.5) in the type specifier 
_BitInt(N). The value BITINT_MAXWIDTH shall expand to a value that is greater 
than or equal to the value of ULLONG_WIDTH._

This means a raw bit width will not fit :(
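For context, a minimal sketch of the arithmetic behind that statement. The 
BITINT_MAXWIDTH value used below is an assumption based on Clang's current 
__BITINT_MAXWIDTH__ and is toolchain-specific, not something fixed by the 
standard (which only requires >= ULLONG_WIDTH):

```cpp
#include <cstdio>

int main() {
  const unsigned kInfoBits = 15;                      // 16-bit TypeInfo field minus 1 sign bit
  const unsigned kMaxDirectWidth = (1u << kInfoBits) - 1;  // 32767: largest N storable directly
  const unsigned kAssumedBitIntMaxWidth = 8388608;    // assumption: Clang's current __BITINT_MAXWIDTH__ (2^23)

  std::printf("max N storable directly in 15 bits: %u\n", kMaxDirectWidth);
  std::printf("assumed BITINT_MAXWIDTH:            %u\n", kAssumedBitIntMaxWidth);
  // With a log2(bit_width) encoding, as in the quoted comment, 15 bits are more
  // than enough for any width a frontend can realistically support.
  return 0;
}
```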
Also, why do we need endianness? Can't we always use the endianness of the 
target platform? We always know it at the moment of code generation.


https://github.com/llvm/llvm-project/pull/96240