aaron.ballman added a comment.

In D133668#3783090 <https://reviews.llvm.org/D133668#3783090>, @beanz wrote:

> HLSL deviates from C here. HLSL doesn't actually have `short` (although I'm 
> actually not sure we should disable it in Clang). We do have `int16_t`, but 
> we don't promote `int16_t` to `int`. We discussed changing codegen to disable 
> promotion for HLSL, but it seemed more straightforward to me to just define 
> `int16_t` as `_BitInt(16)`.

Okay, that's good to know! If you don't intend to *ever* conform to the 
standard in this area, then I think this approach is very reasonable. But you 
should know up front that you're constraining yourself here. (Changing the 
underlying type in the future is an ABI break: https://godbolt.org/z/P6ndrzMab, 
note the name mangling.)



================
Comment at: clang/lib/Basic/Targets/DirectX.h:66
 
+  bool hasBitIntType() const override { return true; }
   bool hasFeature(StringRef Feature) const override {
----------------
This change requires more testing/thought, IMO -- do you support 128-bit 
operations? When we bump Clang's maximum `_BitInt` width (currently 128 bits) 
to be arbitrarily high, should DX inherit that arbitrarily high limit, or do 
you need to enforce something lower? Have you considered how you want to pack 
these into structures, or other data layout considerations?


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D133668/new/

https://reviews.llvm.org/D133668
