================
@@ -3324,6 +3359,20 @@ llvm::Constant *CodeGenFunction::EmitCheckTypeDescriptor(QualType T) {
       DiagnosticsEngine::ak_qualtype, (intptr_t)T.getAsOpaquePtr(), StringRef(),
       StringRef(), std::nullopt, Buffer, std::nullopt);
 
+  if (IsBitInt) {
+    // The Structure is: 0 to end the string, 32 bit unsigned integer in target
+    // endianness, zero.
+    char S[6] = {'\0', '\0', '\0', '\0', '\0', '\0'};
----------------
AdamMagierFOSS wrote:

Probably a bit late to the party, and I struggle to follow discussion history on
GitHub, but what was the reasoning behind storing the bit width binary-encoded
after the type name rather than having the runtime parse it out of the type name
itself (e.g. parsing the value 37 out of '_BitInt(37)' instead of extracting the
encoded 37 from '_BitInt(37)\x00\x25\x00\x00\x00')? I would imagine that parsing
it from the name would be easier, since there would be no platform (endianness)
dependence when traversing the string, and I don't think it would incur much of
a performance hit (even if the solution might look a bit kludgey).
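
For what it's worth, here is a minimal sketch of the two approaches being
contrasted. The helper names are hypothetical, not the actual compiler-rt
implementation; it assumes a descriptor laid out as in the patch, i.e. the
NUL-terminated type name followed by a 32-bit bit width in target endianness:

#include <cstdint>
#include <cstdlib>
#include <cstring>

// Patch's approach: read the binary-encoded width stored after the NUL.
// Needs memcpy for alignment and depends on the target's endianness.
static unsigned bitWidthFromEncodedField(const char *TypeName) {
  const char *AfterNul = TypeName + std::strlen(TypeName) + 1;
  std::uint32_t Width;
  std::memcpy(&Width, AfterNul, sizeof(Width));
  return Width; // would need a byte swap if host and target endianness differ
}

// Alternative raised above: parse the width out of the name itself,
// e.g. the 37 in "'_BitInt(37)'". No endianness concern, a bit more parsing.
static unsigned bitWidthFromName(const char *TypeName) {
  const char *Open = std::strchr(TypeName, '(');
  return Open ? static_cast<unsigned>(std::strtoul(Open + 1, nullptr, 10)) : 0;
}

Both take the pointer to the start of the type name string stored in the type
descriptor.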

https://github.com/llvm/llvm-project/pull/96240